<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Liora</title>
    <description>The latest articles on DEV Community by Liora (@liora_22).</description>
    <link>https://dev.to/liora_22</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3808281%2F8e6b244c-9c1e-4e34-9cd8-0bfb647b2dfa.png</url>
      <title>DEV Community: Liora</title>
      <link>https://dev.to/liora_22</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/liora_22"/>
    <language>en</language>
    <item>
      <title>I Audited 30 Developer Documentation Sites. Here's What the Best Ones Do Differently.</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Tue, 14 Apr 2026 10:07:45 +0000</pubDate>
      <link>https://dev.to/liora_22/i-audited-30-developer-documentation-sites-heres-what-the-best-ones-do-differently-5f2a</link>
      <guid>https://dev.to/liora_22/i-audited-30-developer-documentation-sites-heres-what-the-best-ones-do-differently-5f2a</guid>
      <description>&lt;p&gt;Over the past two months I've been reviewing public developer documentation. Thirty sites. Some from companies you've heard of. Some from startups whose documentation is better than companies ten times their size.&lt;/p&gt;

&lt;p&gt;I went through each one as a developer would - tried the quickstart, searched for common tasks, copied code examples and ran them, checked how errors were handled. No automation. Just me, a terminal, and a growing spreadsheet of notes.&lt;/p&gt;

&lt;p&gt;Here is what separates the docs that developers love from the docs that developers tolerate.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Time to first success: under five minutes
&lt;/h2&gt;

&lt;p&gt;The best documentation sites get you to a working result fast. Not "understanding" - a result. A response from the API. A deployed function. Something you can point to and say "it works."&lt;/p&gt;

&lt;p&gt;The worst ones start with "About Our Company" and take eight clicks to reach the part where you actually do something. One site I audited required creating an account, configuring a workspace, generating an API key, installing an SDK, creating a project, AND reading a conceptual overview before you could make your first API call. Eleven steps.&lt;/p&gt;

&lt;p&gt;The best one? Two steps. Copy a cURL command from the homepage. Paste it into your terminal. See a response.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Search returns answers, not pages
&lt;/h2&gt;

&lt;p&gt;Bad search: you type "rate limits" and get a list of ten pages that contain those words somewhere. Good search: you type "rate limits" and get "100 requests per minute for free tier, 1000 for paid" with a link to the full page.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Code examples that run
&lt;/h2&gt;

&lt;p&gt;I tested code examples on all thirty sites. Copied them. Pasted them. Ran them.&lt;/p&gt;

&lt;p&gt;Eleven out of thirty had at least one broken example on their quickstart page. Common failures: hardcoded API keys that were revoked, import statements for modules that were renamed, response schemas that changed since the example was written.&lt;/p&gt;

&lt;p&gt;The sites with working examples all had one thing in common: automated testing in CI.&lt;/p&gt;
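&lt;p&gt;A minimal sketch of what that CI check can look like, in Python. This version only verifies that fenced Python snippets &lt;em&gt;parse&lt;/em&gt; - a much weaker guarantee than running them against staging, but enough to catch syntax rot and truncated examples. The snippet format is an assumption about your markdown, not a standard:&lt;/p&gt;

```python
import re

TICKS = "`" * 3  # triple backtick, built at runtime so this file can test itself
FENCE = re.compile(TICKS + r"(\w+)\n(.*?)" + TICKS, re.DOTALL)

def check_python_snippets(md_text: str) -> list[str]:
    """Return an error message for every fenced Python block that fails to compile."""
    errors = []
    for i, (lang, body) in enumerate(FENCE.findall(md_text), start=1):
        if lang != "python":
            continue
        try:
            # compile() parses without executing - safe to run on every merge
            compile(body, f"<snippet {i}>", "exec")
        except SyntaxError as exc:
            errors.append(f"snippet {i}: {exc.msg} (line {exc.lineno})")
    return errors
```

&lt;p&gt;Fail the build when the returned list is non-empty. Running examples against a staging API is the stronger version, but even this catches the copy-paste damage that plagued those eleven quickstarts.&lt;/p&gt;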

&lt;h2&gt;
  
  
  4. Changelogs that explain WHY, not just WHAT
&lt;/h2&gt;

&lt;p&gt;Bad: "Updated authentication flow. Deprecated legacy endpoints. Performance improvements."&lt;/p&gt;

&lt;p&gt;Good: "Authentication: Replaced API key auth with OAuth 2.0. Why: API keys were being shared in public repos. Migration: See the upgrade guide."&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Error messages that link to docs
&lt;/h2&gt;

&lt;p&gt;The single highest-impact, lowest-effort improvement: when the API returns an error, include a URL to the relevant docs page in the error response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rate_limit_exceeded"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Too many requests."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"docs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://docs.example.com/rate-limits"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three of the thirty sites did this. Those three had measurably fewer support tickets about error handling.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do on Monday
&lt;/h2&gt;

&lt;p&gt;Pick the easiest one. For most teams, that's number 5 - add docs links to error responses. It takes ten minutes to implement.&lt;/p&gt;
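&lt;p&gt;For illustration, here's what the ten-minute version can look like in Python. The error codes and the &lt;code&gt;docs.example.com&lt;/code&gt; paths are invented for the sketch - swap in your own error taxonomy:&lt;/p&gt;

```python
# Central error builder: every known error code gets a docs link attached.
DOCS_BASE = "https://docs.example.com"

DOCS_PATHS = {
    "rate_limit_exceeded": "/rate-limits",
    "invalid_token": "/authentication",
    "not_found": "/errors#not-found",
}

def error_response(code: str, message: str) -> dict:
    """Build an error payload; unknown codes fall back to the general errors page."""
    return {
        "error": code,
        "message": message,
        "docs": DOCS_BASE + DOCS_PATHS.get(code, "/errors"),
    }
```

&lt;p&gt;Route every error through one builder like this and the docs link becomes impossible to forget.&lt;/p&gt;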




</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>documentation</category>
      <category>programming</category>
    </item>
    <item>
      <title>Your Documentation Is Costing You EUR 147K Per Year. Here's the Math Nobody Wants to Do.</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Thu, 02 Apr 2026 17:43:38 +0000</pubDate>
      <link>https://dev.to/liora_22/your-documentation-is-costing-you-eur-147k-per-year-heres-the-math-nobody-wants-to-do-6co</link>
      <guid>https://dev.to/liora_22/your-documentation-is-costing-you-eur-147k-per-year-heres-the-math-nobody-wants-to-do-6co</guid>
      <description>&lt;p&gt;I'm going to do something unpopular. I'm going to talk about documentation like it's money. Not "documentation is important" money - not the kind where a VP nods thoughtfully and then funds something else. Actual money. The kind with digits and uncomfortable silence in quarterly reviews.&lt;/p&gt;

&lt;p&gt;If you've read anything about DocOps - the practice of treating documentation like code, running it through CI/CD, automating quality checks - you've probably encountered the inspirational version. "Documentation as a dynamic asset." "Collaborative knowledge management." "Continuous publishing." Beautiful words. They sound like a LinkedIn post written by someone who's never had to explain to a CFO why the docs budget should exist.&lt;/p&gt;

&lt;p&gt;Let me offer something different. A calculator.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The silent cost of documentation drift&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here's a number most companies don't track: how many support tickets originate from outdated documentation.&lt;/p&gt;

&lt;p&gt;They don't track it because nobody categorizes tickets that way. A customer writes "I can't authenticate using the token format described in your guide." Support logs it as "authentication issue." Engineering investigates. Forty minutes later, someone realizes the guide describes OAuth 1.0 and the API moved to OAuth 2.0 four months ago. The ticket gets resolved. Nobody updates the category. Nobody tells the docs team.&lt;/p&gt;

&lt;p&gt;Let me build the arithmetic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yflsiyas6h7nuyjunia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yflsiyas6h7nuyjunia.png" alt="The cost of one docs-originated support ticket" width="800" height="865"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conservative total: EUR 62.50 per ticket.&lt;/strong&gt; And that's the cheap version - the one where the customer bothered to write.&lt;/p&gt;

&lt;p&gt;Now scale it.&lt;/p&gt;

&lt;p&gt;A company with 50+ engineers shipping biweekly has roughly 80-200 documentation pages. In my experience auditing these, &lt;strong&gt;8-15% of pages drift from the actual product within 90 days&lt;/strong&gt; of a release. Not "slightly outdated." Wrong. Describing features that changed, endpoints that moved, auth flows that were deprecated.&lt;/p&gt;

&lt;p&gt;For a 120-page docs site: that's 10-18 pages actively misleading users at any given time.&lt;/p&gt;

&lt;p&gt;If each wrong page generates just 2 tickets per month (conservative - popular pages generate far more), that's 20-36 tickets per month at EUR 62.50 each.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EUR 1,250-2,250 per month. EUR 15,000-27,000 per year.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In silent, uncategorized, invisible damage.&lt;/p&gt;

&lt;p&gt;And I haven't counted the customers who hit the wrong page and quietly left. The ones who tried your quickstart, got a 404 on step 3, and evaluated your competitor instead. Those don't show up in any dashboard. They just don't come back.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;"But we have a docs team"&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You might. And they're probably excellent writers.&lt;/p&gt;

&lt;p&gt;The problem isn't writing quality. It's operational awareness. A docs team that isn't plugged into the release pipeline doesn't know what changed until someone tells them. And "someone tells them" is the most unreliable automation system ever invented. It has a success rate of roughly 30% and degrades sharply on Fridays before long weekends.&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/Liora-the-flexboxer/embed/ByLJJOa?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The fix isn't "hire more writers" or "make developers write docs" (they won't, and when they do, the results are... educational). The fix is infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What automated docs operations actually looks like&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;DocOps - the real version, not the conference-talk version - is a set of automated checks that run on your documentation the same way tests run on your code.&lt;/p&gt;

&lt;p&gt;Here's what a mature pipeline catches, automatically, on every merge:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdw3y2oflz574atnywb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdw3y2oflz574atnywb6.png" alt="What a mature docs pipeline catches automatically" width="800" height="738"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Drift detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your docs reference API v2.3. Your OpenAPI spec says v4.7. A script compares them and fails the build. Not in three weeks when a customer notices. Right now, in the PR.&lt;/p&gt;

&lt;p&gt;This is the single highest-ROI check you can implement. It takes one Python script, one CI job, and about two hours to set up. It will save you more money in the first month than you spent building it.&lt;/p&gt;
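&lt;p&gt;A sketch of the idea, assuming your docs cite versions as "API v2.3" and your spec is a JSON OpenAPI file - both conventions you'd adapt to your own repo:&lt;/p&gt;

```python
import json
import re
from pathlib import Path

# Assumed convention: docs mention versions as "API v2.3"; adjust to your phrasing.
VERSION_REF = re.compile(r"API v(\d+\.\d+)")

def find_drift(docs_dir: str, spec_path: str) -> list[tuple[str, str]]:
    """Return (page, version) pairs where a doc cites an API version
    that no longer matches the one declared in the OpenAPI spec."""
    spec_version = json.loads(Path(spec_path).read_text())["info"]["version"]
    stale = []
    for page in Path(docs_dir).rglob("*.md"):
        for ref in VERSION_REF.findall(page.read_text()):
            if ref != spec_version:
                stale.append((str(page), ref))
    return stale
```

&lt;p&gt;In CI: run it, print the pairs, exit non-zero if the list isn't empty. That's the whole job.&lt;/p&gt;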

&lt;p&gt;&lt;strong&gt;2. Freshness monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every doc page has a last-reviewed date in its frontmatter. A weekly job scans for pages older than 90 days and generates a staleness report. Pages linked to endpoints that changed since last review get flagged automatically.&lt;/p&gt;

&lt;p&gt;This isn't complicated. It's a cron job and a metadata convention. The reason most teams don't have it is that nobody thought to build it, not that it's hard.&lt;/p&gt;
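&lt;p&gt;The cron job is roughly this (I'm assuming a &lt;code&gt;last_reviewed:&lt;/code&gt; frontmatter key - use whatever your pages actually carry):&lt;/p&gt;

```python
import datetime as dt
import re
from pathlib import Path

# Assumed frontmatter key; adapt the pattern to your metadata convention.
LAST_REVIEWED = re.compile(r'^last_reviewed:\s*"?(\d{4}-\d{2}-\d{2})"?', re.MULTILINE)

def stale_pages(docs_dir: str, today: dt.date, max_age_days: int = 90) -> list[str]:
    """List pages whose last_reviewed date is missing or older than max_age_days."""
    stale = []
    for page in Path(docs_dir).rglob("*.md"):
        match = LAST_REVIEWED.search(page.read_text())
        if match is None:
            stale.append(str(page))  # no review date at all: treat as stale
            continue
        reviewed = dt.date.fromisoformat(match.group(1))
        if (today - reviewed).days > max_age_days:
            stale.append(str(page))
    return stale
```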

&lt;p&gt;&lt;strong&gt;3. Quality gates in CI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before a docs PR merges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vale lints for style consistency (American English, active voice, no weasel words)&lt;/li&gt;
&lt;li&gt;Markdownlint checks structure&lt;/li&gt;
&lt;li&gt;A frontmatter validator ensures every page has required metadata&lt;/li&gt;
&lt;li&gt;A link checker confirms nothing points to a 404&lt;/li&gt;
&lt;li&gt;A code snippet linter verifies that examples actually parse&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Five automated checks. Every merge. No human reading 200 pages to find the one place where someone wrote "utilise" instead of "use."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Content gap detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Compare your codebase against your docs. Every public function, endpoint, or feature flag that doesn't have a corresponding documentation page shows up in a report. Not "we should probably document that." Here's the list, sorted by user impact, with draft templates ready to fill.&lt;/p&gt;
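&lt;p&gt;The crude-but-effective starting point, assuming an OpenAPI spec as the source of truth (a plain substring scan, so expect some false negatives on pages that mention an endpoint without documenting it):&lt;/p&gt;

```python
import json
from pathlib import Path

def undocumented_endpoints(spec_path: str, docs_dir: str) -> list[str]:
    """Endpoints declared in the OpenAPI spec that no doc page mentions."""
    spec = json.loads(Path(spec_path).read_text())
    # Concatenate every docs page into one searchable corpus
    corpus = " ".join(p.read_text() for p in Path(docs_dir).rglob("*.md"))
    return sorted(ep for ep in spec["paths"] if ep not in corpus)
```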

&lt;p&gt;&lt;strong&gt;5. SEO and discoverability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Documentation that nobody finds is documentation that doesn't exist. Automated checks for meta descriptions, heading hierarchy, internal link density, and first-paragraph keyword coverage. Because your docs compete with Stack Overflow for your own users' attention, and Stack Overflow has a head start.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The math, revisited&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7l56ja3tjd5vdrx2l0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7l56ja3tjd5vdrx2l0b.png" alt="The math: what it costs to build vs what it saves" width="800" height="734"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Setting up a basic docs-as-code pipeline with automated checks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The minimum viable version:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrate docs to Git + Markdown: 2-4 weeks of one person's time&lt;/li&gt;
&lt;li&gt;Set up basic CI checks (Vale, linting, frontmatter): 1 week&lt;/li&gt;
&lt;li&gt;Build drift detection against API spec: 2-3 days&lt;/li&gt;
&lt;li&gt;Configure freshness monitoring: 1 day&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;That gets you started.&lt;/strong&gt; Call it EUR 8-12K in labor for a senior technical writer or DevOps engineer. You'll catch the obvious problems.&lt;/p&gt;

&lt;p&gt;But the obvious problems are maybe 30% of the damage. The rest - semantic inconsistencies between pages, content gaps against your codebase, SEO that actually competes with Stack Overflow, multi-protocol API coverage, knowledge graph maintenance for RAG readiness - that's not a week of setup. That's months of engineering, ongoing maintenance, and expertise that sits at the intersection of technical writing, DevOps, and API design. Most teams don't have that person. The ones that do usually have them doing something else.&lt;/p&gt;

&lt;p&gt;The ROI math still works either way. Even the basic version pays for itself:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Annual return:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Eliminated docs-originated tickets: EUR 15-27K&lt;/li&gt;
&lt;li&gt;Reduced engineering time on "why do the docs say this": EUR 5-10K&lt;/li&gt;
&lt;li&gt;Faster onboarding (new hires find correct information first time): hard to quantify, universally reported as "significant"&lt;/li&gt;
&lt;li&gt;Customers who don't leave because your quickstart actually works: priceless, but also real&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conservative ROI: 2-3x in the first year.&lt;/strong&gt; And the pipeline gets better over time because it accumulates institutional knowledge about your specific documentation patterns.&lt;/p&gt;

&lt;p&gt;Compare this to the alternative: hoping someone notices. Hoping is not a strategy. It's what happens when you don't have one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this has to do with AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwo4yu75uvxwcu5vvy24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwo4yu75uvxwcu5vvy24.png" alt="Where AI actually helps - and where it doesn't" width="800" height="791"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's a version of this story where AI is the protagonist. "AI will fix your documentation!" It's a good story. It's also incomplete.&lt;/p&gt;

&lt;p&gt;AI is excellent at generating content. It's mediocre at knowing when content is wrong. Feed an LLM your contradictory docs and it will confidently synthesize them into a coherent, well-written, completely incorrect answer. This is not a hypothetical - I've seen RAG chatbots do exactly this, and the company that deployed it saw support tickets increase 40% in the first week because customers now had a new, authoritative source of wrong information.&lt;/p&gt;

&lt;p&gt;Where AI actually helps in docs operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generating first drafts&lt;/strong&gt; from API specs or code comments - a starting point, not a final product&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flagging semantic inconsistencies&lt;/strong&gt; between pages that a regex can't catch ("this page says tokens expire in 1 hour, that page says 24 hours")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Summarizing changes&lt;/strong&gt; between documentation versions for review&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suggesting missing sections&lt;/strong&gt; based on patterns in your existing docs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But AI without operational infrastructure is just a faster way to produce content nobody verifies. The pipeline comes first. The automation comes first. Then AI amplifies what the pipeline already does.&lt;/p&gt;

&lt;h2&gt;
  
  
  The uncomfortable question
&lt;/h2&gt;

&lt;p&gt;Here's what I'd ask any VP of Engineering reading this:&lt;/p&gt;

&lt;p&gt;Do you know - right now, today - how many pages in your documentation describe something that no longer matches production?&lt;/p&gt;

&lt;p&gt;If the answer is "I don't know," that's not a documentation problem. That's a revenue problem wearing a documentation costume. And the costume is getting expensive.&lt;/p&gt;

&lt;p&gt;The tools to fix this exist. They're not exotic. Git, CI/CD, a few Python scripts, and the decision that documentation is infrastructure, not content.&lt;/p&gt;

&lt;p&gt;The decision is the hard part. The tooling is the easy part.&lt;/p&gt;

&lt;p&gt;But the tooling won't build itself. And neither will the process. And "we should really do something about our docs" has a half-life of about 48 hours before it gets deprioritized by something louder.&lt;/p&gt;

&lt;p&gt;So. The math is on the table. The approach is described. The question is whether the number is uncomfortable enough to do something about it, or comfortable enough to keep ignoring.&lt;/p&gt;

&lt;p&gt;In my experience, it takes exactly one high-value customer churning because your quickstart was wrong to shift the answer from the second to the first.&lt;/p&gt;

&lt;p&gt;I'd rather not wait for that.&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>devops</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Make Your Documentation RAG-Ready (Without Rewriting Everything)</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Tue, 31 Mar 2026 07:58:09 +0000</pubDate>
      <link>https://dev.to/liora_22/how-to-make-your-documentation-rag-ready-without-rewriting-everything-1bb4</link>
      <guid>https://dev.to/liora_22/how-to-make-your-documentation-rag-ready-without-rewriting-everything-1bb4</guid>
      <description>&lt;p&gt;Every team that hears "your docs need to be RAG-ready" thinks the same thing: we need to rewrite everything from scratch.&lt;/p&gt;

&lt;p&gt;No. You need to fix three specific things. They take a week. You can do them while your existing docs continue to exist, unharmed, in their current state. Think of it as renovation, not demolition.&lt;/p&gt;

&lt;p&gt;Here's what "RAG-ready" actually means, stripped of the buzzwords: when a retrieval system grabs a chunk of your documentation, that chunk should make sense on its own, be accurate, and not contradict other chunks. That's it. That's the whole specification.&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/Liora-the-flexboxer/embed/PwGReQd?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. Fix your heading hierarchy&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;RAG systems chunk documents by headings. If your page jumps from H1 to H3, skipping H2, the chunker doesn't know where one concept ends and another begins. It's like a book with chapters but no sections - the reader (in this case, a machine) has to guess where the boundaries are.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkr24zgc5nkkr9khweopf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkr24zgc5nkkr9khweopf.png" alt="Headings" width="800" height="621"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rule: H1 contains H2s. H2s contain H3s. No skipping. Think of headings as Russian nesting dolls - each one should fit inside the one above it.&lt;/p&gt;
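&lt;p&gt;The no-skipping rule is mechanically checkable, which means it belongs in CI rather than in a style guide nobody reads. A small sketch for markdown headings:&lt;/p&gt;

```python
import re

HEADING = re.compile(r"^(#{1,6})\s", re.MULTILINE)

def heading_skips(md_text: str) -> list[tuple[int, int]]:
    """Return (from_level, to_level) pairs where a heading jumps more than
    one level deeper than its predecessor (e.g. H1 straight to H3)."""
    levels = [len(m.group(1)) for m in HEADING.finditer(md_text)]
    # Going shallower is fine; only deeper-by-more-than-one is a skip
    return [(a, b) for a, b in zip(levels, levels[1:]) if b > a + 1]
```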

&lt;h2&gt;
  
  
  &lt;strong&gt;2. One concept per page&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When a user asks "how do I set up webhooks?" and your retrieval system returns a chunk from a page called "Getting Started" that covers account creation, SDK installation, webhooks, error handling, AND billing - the model has to extract webhooks from a soup of unrelated topics. Sometimes it succeeds. Sometimes it grabs the billing section and confidently explains how to configure your payment method as if that's what a webhook is.&lt;/p&gt;

&lt;p&gt;One concept per page means one clean answer per retrieval.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;3. Add metadata to your frontmatter&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizwhwn90e3kwto356753.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizwhwn90e3kwto356753.png" alt="Frontmatter" width="800" height="801"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;title: "Webhooks Configuration"
product_version: "4.7"
audience: developer
last_verified: "2026-03-01"
concepts: [webhooks, events, callbacks]
related: [authentication, error-handling]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;&lt;code&gt;product_version&lt;/code&gt; prevents mixing v3 and v4 instructions. &lt;code&gt;last_verified&lt;/code&gt; lets you flag stale content before the model reads it.&lt;/p&gt;
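&lt;p&gt;Here's how a retrieval layer can actually use those two fields before anything reaches the model. The chunk dictionary shape is an assumption for the sketch - your vector store will have its own:&lt;/p&gt;

```python
import datetime as dt

def usable_chunks(chunks: list[dict], product_version: str,
                  today: dt.date, max_age_days: int = 180) -> list[dict]:
    """Keep only chunks whose metadata matches the running product version
    and whose last_verified date is recent enough to trust."""
    keep = []
    for chunk in chunks:
        meta = chunk.get("metadata", {})
        if meta.get("product_version") != product_version:
            continue  # never mix v3 instructions into a v4 answer
        verified = dt.date.fromisoformat(meta.get("last_verified", "1970-01-01"))
        if (today - verified).days > max_age_days:
            continue  # stale page: flag for review instead of serving it
        keep.append(chunk)
    return keep
```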

&lt;p&gt;&lt;strong&gt;What NOT to do:&lt;/strong&gt;&lt;br&gt;
Don't stuff keywords. Don't merge unrelated topics "for efficiency." Don't assume the model will "figure it out." Models are very good at synthesis. They are not good at resolving contradictions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlw3pumjkcay7zmn35sh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlw3pumjkcay7zmn35sh.png" alt="Three fixes" width="800" height="847"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pick your ten most-visited doc pages. Fix their heading hierarchy. Check if any page covers multiple concepts. Add frontmatter metadata. That's your week. It's not glamorous. It works.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>documentation</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Your CI/CD Pipeline Has a Blind Spot (and It's Not What You Think)</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Tue, 24 Mar 2026 09:34:08 +0000</pubDate>
      <link>https://dev.to/liora_22/your-cicd-pipeline-has-a-blind-spot-and-its-not-what-you-think-4a1n</link>
      <guid>https://dev.to/liora_22/your-cicd-pipeline-has-a-blind-spot-and-its-not-what-you-think-4a1n</guid>
      <description>&lt;p&gt;Your pipeline catches a missing semicolon in thirty seconds.&lt;/p&gt;

&lt;p&gt;It runs four thousand unit tests, flags security vulnerabilities, checks code style, enforces branch naming conventions, and sends a slightly passive-aggressive Slack notification if someone pushes directly to main.&lt;/p&gt;

&lt;p&gt;It does not check whether your API documentation still describes your API.&lt;/p&gt;

&lt;p&gt;Think about this for a second. Your docs are the first thing a developer reads before integrating with your product. If your quickstart references a token format you stopped using in January, you'll find out from a support ticket three weeks later. Not from your pipeline. Your pipeline doesn't know the docs exist. The docs don't know the pipeline exists. They're roommates who've never met, living in the same repository, communicating through the medium of customer frustration.&lt;/p&gt;

&lt;p&gt;Here's how to introduce them. One afternoon. Zero budget.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: Lint your prose like you lint your code&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Vale is an open-source linter for prose. You give it rules. It yells at your documentation. Same principle as ESLint, but instead of catching unused variables, it catches things like "your docs say 'workspace' on page 1 and 'project' on page 7 for the same concept."&lt;/p&gt;

&lt;p&gt;Install it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;vale    &lt;span class="c"&gt;# Linux&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;vale    &lt;span class="c"&gt;# macOS / Linux (Homebrew)&lt;/span&gt;
&lt;span class="c"&gt;# or download a binary:&lt;/span&gt;
&lt;span class="c"&gt;# https://github.com/errata-ai/vale/releases&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create &lt;code&gt;.vale.ini&lt;/code&gt; in your docs root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="py"&gt;StylesPath&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;.vale/styles&lt;/span&gt;
&lt;span class="py"&gt;MinAlertLevel&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;

&lt;span class="py"&gt;Packages&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;MyCompany&lt;/span&gt;

&lt;span class="nn"&gt;[*.md]&lt;/span&gt;
&lt;span class="py"&gt;BasedOnStyles&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;Vale, MyCompany&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the useful part - custom rules. Say your product renamed "workspace" to "project" three months ago, but half the docs missed the memo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .vale/styles/MyCompany/Terminology.yml&lt;/span&gt;
&lt;span class="na"&gt;extends&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;substitution&lt;/span&gt;
&lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'%s'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;instead&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;of&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'%s'."&lt;/span&gt;
&lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;error&lt;/span&gt;
&lt;span class="na"&gt;swap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workspace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;project&lt;/span&gt;
  &lt;span class="na"&gt;log in&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sign in&lt;/span&gt;
  &lt;span class="na"&gt;click on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;select&lt;/span&gt;
  &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repository&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vale docs/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The first time I ran this on a real documentation site - 340 pages - it flagged 918 inconsistencies. The docs had been "reviewed and approved" two weeks earlier. By humans. Who presumably read them. The machine wasn't smarter. It was just more literal.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Validate that your code examples actually work&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Your docs contain cURL examples. Lovely. Do they return what the docs claim they return?&lt;/p&gt;

&lt;p&gt;Not "probably." Do they. Right now.&lt;/p&gt;

&lt;p&gt;The embarrassingly simple approach: a script that pulls code blocks from your markdown and runs them against staging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-uo&lt;/span&gt; pipefail

&lt;span class="nv"&gt;DOCS_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;docs&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;FAILED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="nv"&gt;TOTAL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="nv"&gt;TMPFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-rh&lt;/span&gt; &lt;span class="s2"&gt;"curl "&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCS_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"*.md"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/^[[:space:]]*//'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"^curl "&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMPFILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; cmd&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nv"&gt;TOTAL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;TOTAL &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$cmd&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="s1"&gt;'--max-time\|--connect-timeout'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nv"&gt;cmd&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$cmd&lt;/span&gt;&lt;span class="s2"&gt; --max-time 5 --connect-timeout 3"&lt;/span&gt;
  &lt;span class="k"&gt;fi

  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Testing: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;cmd&lt;/span&gt;:0:80&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;..."&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$cmd&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"  OK"&lt;/span&gt;
  &lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"  FAILED (exit &lt;/span&gt;&lt;span class="nv"&gt;$?&lt;/span&gt;&lt;span class="s2"&gt;)"&lt;/span&gt;
    &lt;span class="nv"&gt;FAILED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;FAILED &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
  &lt;span class="k"&gt;fi
done&lt;/span&gt; &amp;lt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMPFILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMPFILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Results: &lt;/span&gt;&lt;span class="nv"&gt;$TOTAL&lt;/span&gt;&lt;span class="s2"&gt; tested, &lt;/span&gt;&lt;span class="nv"&gt;$FAILED&lt;/span&gt;&lt;span class="s2"&gt; failed."&lt;/span&gt;
&lt;span class="nb"&gt;exit&lt;/span&gt; &lt;span class="nv"&gt;$FAILED&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Crude? Very. Better than discovering broken examples from a customer tweet? Enormously.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Add a freshness gate to CI&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Every documentation page should know how old it is relative to the code it describes. Add a &lt;code&gt;last_reviewed&lt;/code&gt; field to your frontmatter:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;---&lt;/span&gt;
&lt;span class="s"&gt;title: "Authentication Guide"&lt;/span&gt;
&lt;span class="s"&gt;last_reviewed: "2025-11-15"&lt;/span&gt;
&lt;span class="s"&gt;api_version: "4.7"&lt;/span&gt;
&lt;span class="s"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Then a GitHub Action that complains when pages go stale:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Docs Freshness Check&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;freshness&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Flag stale docs&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;set -uo pipefail&lt;/span&gt;

           &lt;span class="s"&gt;#!/bin/bash&lt;/span&gt;
  &lt;span class="s"&gt;set -uo pipefail&lt;/span&gt;

  &lt;span class="s"&gt;DOCS_DIR="${1:-docs}"&lt;/span&gt;
  &lt;span class="s"&gt;MAX_AGE_DAYS="${2:-90}"&lt;/span&gt;
  &lt;span class="s"&gt;STALE=0&lt;/span&gt;
  &lt;span class="s"&gt;MISSING=0&lt;/span&gt;

  &lt;span class="s"&gt;THRESHOLD=$(date -d "$MAX_AGE_DAYS days ago" +%s)&lt;/span&gt;

  &lt;span class="s"&gt;echo "Checking docs freshness (max age&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$MAX_AGE_DAYS days)..."&lt;/span&gt;
  &lt;span class="s"&gt;echo ""&lt;/span&gt;

  &lt;span class="s"&gt;while IFS= read -r file; do&lt;/span&gt;
    &lt;span class="s"&gt;reviewed=$(grep -oP 'last_reviewed:\s*"\K[^"]+' "$file" 2&amp;gt;/dev/null || &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;

    &lt;span class="s"&gt;if [ -z "$reviewed" ]; then&lt;/span&gt;
      &lt;span class="s"&gt;echo "WARNING&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$file - no last_reviewed date"&lt;/span&gt;
      &lt;span class="s"&gt;MISSING=$((MISSING + 1))&lt;/span&gt;
      &lt;span class="s"&gt;continue&lt;/span&gt;
    &lt;span class="s"&gt;fi&lt;/span&gt;

    &lt;span class="s"&gt;reviewed_ts=$(date -d "$reviewed" +%s 2&amp;gt;/dev/null || echo 0)&lt;/span&gt;

    &lt;span class="s"&gt;if [ "$reviewed_ts" -lt "$THRESHOLD" ]; then&lt;/span&gt;
      &lt;span class="s"&gt;echo "STALE&lt;/span&gt;&lt;span class="na"&gt;:   $file (last reviewed&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$reviewed)"&lt;/span&gt;
      &lt;span class="s"&gt;STALE=$((STALE + 1))&lt;/span&gt;
    &lt;span class="s"&gt;else&lt;/span&gt;
      &lt;span class="s"&gt;echo "OK&lt;/span&gt;&lt;span class="na"&gt;:      $file (last reviewed&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$reviewed)"&lt;/span&gt;
    &lt;span class="s"&gt;fi&lt;/span&gt;
  &lt;span class="s"&gt;done &amp;lt; &amp;lt;(find "$DOCS_DIR" -name "*.md" -type f | sort)&lt;/span&gt;

  &lt;span class="s"&gt;echo ""&lt;/span&gt;
  &lt;span class="s"&gt;echo "Results&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$STALE stale, $MISSING without date."&lt;/span&gt;

  &lt;span class="s"&gt;if [ "$STALE" -gt 0 ]; then&lt;/span&gt;
    &lt;span class="s"&gt;exit &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;90 days is generous. Adjust to taste. The point isn't the number - the point is that a number exists where previously there was a vague feeling that "someone should probably look at that."&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What this doesn't cover&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is three checks. A real documentation quality pipeline would also validate OpenAPI spec alignment, cross-reference integrity between pages, heading hierarchy for RAG-readiness, link rot detection, and about fourteen other things I could list but won't, because the goal here is to start, not to achieve perfection by Thursday.&lt;/p&gt;

&lt;p&gt;Start with these three. Your docs won't be perfect. But they'll stop lying quite so confidently.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>documentation</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Stop Testing for Genius Hackers</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Fri, 20 Mar 2026 11:47:44 +0000</pubDate>
      <link>https://dev.to/liora_22/stop-testing-for-genius-hackers-2p47</link>
      <guid>https://dev.to/liora_22/stop-testing-for-genius-hackers-2p47</guid>
      <description>&lt;p&gt;You should stop writing unit tests for edge cases that don't exist.&lt;/p&gt;

&lt;p&gt;I spent three months testing what happens if someone pastes the entire Linux source code into a password field. Not a snippet - the whole kernel. I even wrote a custom validator that would gracefully handle kernel panics expressed in regex. The feature went live. Zero users tried it. Meanwhile, three users managed to lock themselves out by typing "password123" with caps lock on - a scenario I never tested because who does that?&lt;/p&gt;

&lt;p&gt;The real edge cases aren't the exotic ones you imagine. They're the boring failures that happen because humans are gloriously inconsistent. Like the user who copy-pasted their email address and accidentally included the "Sent from my iPhone" signature. Or the one who tried to register with their credit card number because the form was "asking for numbers."&lt;/p&gt;

&lt;p&gt;Your tests should cover what users actually do, not what you think they might do after three sleepless nights of paranoia. Test the three most common typos. Test what happens when someone gets distracted mid-form and returns three hours later. Test the "my toddler got my phone" scenario.&lt;/p&gt;

&lt;p&gt;The best bug report I ever got was from a user who wrote: "Your app broke when I tried to sign up while eating tacos." I tested the taco scenario. Turns out they were using one hand and accidentally triggering both the password visibility toggle and the submit button simultaneously. Fixed it in twenty minutes.&lt;/p&gt;

&lt;p&gt;Stop testing for genius hackers and start testing for hungry people with greasy fingers.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>testing</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>Your Retry Logic Works. Your Timeout Doesn't.</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Thu, 19 Mar 2026 18:50:05 +0000</pubDate>
      <link>https://dev.to/liora_22/your-retry-logic-works-your-timeout-doesnt-1hie</link>
      <guid>https://dev.to/liora_22/your-retry-logic-works-your-timeout-doesnt-1hie</guid>
      <description>&lt;p&gt;I watched a junior developer spend three days debugging why our retry logic worked perfectly on his machine but failed on staging. He'd written a beautiful exponential backoff algorithm with jitter and circuit breaker patterns. It was a work of art. It also never worked.&lt;/p&gt;

&lt;p&gt;The problem wasn't his code. It was that he'd been testing with a local server that responded in 50 milliseconds. Our staging environment averaged 800 milliseconds per request. His retry logic was giving up faster than the server could respond.&lt;/p&gt;

&lt;p&gt;This is what happens when we write retry mechanisms in a vacuum. We focus on the algorithms - exponential backoff, linear backoff, polynomial backoff, whatever the latest blog post told us to use. We forget that users don't care about our elegant mathematics. They care that their upload doesn't vanish into the void.&lt;/p&gt;

&lt;p&gt;Here's what actually matters when building retry logic:&lt;/p&gt;

&lt;p&gt;Start with the timeout, not the algorithm. Most failures happen because developers pick random timeout values. Test your actual API under real conditions. If 95% of requests complete in 2 seconds, set your initial timeout to 3 seconds, not 500 milliseconds because "it feels right."&lt;/p&gt;
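&lt;p&gt;If you want that number to come from data rather than vibes, derive it from measured latencies. A minimal sketch - the synthetic timings below stand in for real ones, which you could collect with curl's &lt;code&gt;-w "%{time_total}"&lt;/code&gt; against your own endpoint:&lt;/p&gt;

```shell
# suggest_timeout: read one response time (seconds) per line on stdin,
# print the p95 and a suggested timeout at 1.5x that value.
set -uo pipefail

suggest_timeout() {
  sort -n | awk '
    { t[NR] = $1 }
    END {
      idx = int((NR * 95 + 99) / 100)   # nearest-rank p95, integer math
      printf "p95=%.2fs suggested_timeout=%.2fs\n", t[idx], t[idx] * 1.5
    }'
}

# Demo with synthetic timings from 0.1s to 2.0s.
seq 1 20 | awk '{ printf "%.1f\n", $1 / 10 }' | suggest_timeout
# prints: p95=1.90s suggested_timeout=2.85s
```

&lt;p&gt;Run it against a few hundred real samples per endpoint; the 1.5x margin is a starting point, not a law.&lt;/p&gt;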

&lt;p&gt;Count failures differently. Don't count "attempts." Count "time since last success." A function that fails three times in one minute needs different handling than one that fails three times across three hours. The second one might just be a server having a bad moment.&lt;/p&gt;

&lt;p&gt;Handle the three failure types separately. Network timeouts need aggressive retries. 5xx errors need exponential backoff. 4xx errors need to stop immediately because you're probably sending bad data. Mixing these together is like using the same medicine for headaches and broken arms.&lt;/p&gt;
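&lt;p&gt;A minimal sketch of that three-way split, assuming a curl-style client - curl really does exit with code 28 on a timed-out operation, though the bucket names here are made up:&lt;/p&gt;

```shell
# classify: map a curl exit code and an HTTP status to a retry policy.
set -uo pipefail

classify() {
  local curl_exit="$1" http_status="$2"
  if [ "$curl_exit" -eq 28 ]; then
    echo "retry-fast"      # network timeout: retry aggressively
  elif [ "$http_status" -ge 500 ]; then
    echo "retry-backoff"   # server error: exponential backoff
  elif [ "$http_status" -ge 400 ]; then
    echo "give-up"         # client error: the request itself is bad
  else
    echo "success"
  fi
}

classify 28 0    # prints retry-fast
classify 0 503   # prints retry-backoff
classify 0 422   # prints give-up
```

&lt;p&gt;The point isn't the shell; it's that the branch happens before any backoff math runs.&lt;/p&gt;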

&lt;p&gt;Build visibility in from day one. Every retry should log: what failed, why it failed, how long it waited, and whether it succeeded. Without this, you're debugging in the dark. With it, patterns emerge. Like how our payment API always fails at 2:17 AM because that's when the backup server reboots. Not coincidentally, this is also when our payment success rate mysteriously drops.&lt;/p&gt;

&lt;p&gt;Test with real chaos. Don't just unplug your ethernet cable. Throttle your connection to 56k speeds. Introduce 10% packet loss. Run your code on a cheap Android phone from 2016. Your users are already doing this. You should too.&lt;/p&gt;

&lt;p&gt;The junior developer fixed his retry logic by adding a 1-second initial timeout and stopping after 3 attempts instead of 5. Success rate jumped from 73% to 98%. Sometimes the best code is the code you don't write - or in this case, the retry attempts you don't attempt.&lt;/p&gt;

&lt;p&gt;He learned that debugging retry logic is 20% understanding algorithms and 80% understanding how networks actually behave in the wild. The real world doesn't care about your beautiful code. It cares whether the upload worked before the user rage-quit the app.&lt;/p&gt;

&lt;p&gt;The best part? Two months later, he caught me making the same mistake with a different service.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Button Nobody Could See</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Thu, 19 Mar 2026 13:56:47 +0000</pubDate>
      <link>https://dev.to/liora_22/the-button-nobody-could-see-1f4m</link>
      <guid>https://dev.to/liora_22/the-button-nobody-could-see-1f4m</guid>
      <description>&lt;p&gt;I spent three months watching people fail to click a button that was right in front of them.&lt;/p&gt;

&lt;p&gt;Not metaphorically. Literally watching. Coffee shop stakeouts. Screen recordings. The whole creepy surveillance routine. The button was large. Blue. Centered. Labeled "Continue." Nobody clicked it.&lt;/p&gt;

&lt;p&gt;Turns out they couldn't see it because they were holding their phones with both hands - one hand stabilizing, one thumb stretched to reach the next input field. The button sat in the visual blind spot created by their own thumb. Users thought the form was broken. They refreshed. They cursed. They left.&lt;/p&gt;

&lt;p&gt;I discovered this by accident when my phone battery died and I borrowed a coworker's ancient Android. The screen was cracked. My thumb covered half the display. Suddenly I understood why the analytics showed a 40% drop-off at that exact moment. The interface worked perfectly for right-handed developers testing on pristine devices. For everyone else, it was an invisible button.&lt;/p&gt;

&lt;p&gt;The fix wasn't moving the button. It was making the entire bottom section of the screen tappable - a 200-pixel dead zone that caught the thumb-reach attempts. Conversion improved by 4%. Not 23%. Four percent. In UX terms, that's winning the lottery.&lt;/p&gt;

&lt;p&gt;Here's what actually happened next: I started testing interfaces with my non-dominant hand while simulating real conditions. Cracked screen protector. One-handed use. Standing on a moving bus. The failures multiplied beautifully. Buttons too small for winter gloves. Text contrast that vanished in sunlight. Haptic feedback too subtle for users with neuropathy.&lt;/p&gt;

&lt;p&gt;The ADHD insight came from a different project entirely. Three years later. A productivity app where users with ADHD kept refreshing the task list, convinced it was broken because nothing moved. We added a subtle pulse animation on the loading spinner - not decorative, just enough motion to signal life. Daily active users increased by 12%. Not tripled. Twelve percent. Still worth the three days of work.&lt;/p&gt;

&lt;p&gt;Test with your non-dominant hand. Test with gloves. Test on a cracked screen. Test while walking, angry, and holding a crying baby in a pharmacy line.&lt;/p&gt;

&lt;p&gt;The button was always visible. The users just couldn't see it through their own thumbs.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ux</category>
      <category>ui</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The bug only surfaced when I demoed to the VP</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Wed, 18 Mar 2026 10:42:28 +0000</pubDate>
      <link>https://dev.to/liora_22/the-bug-only-surfaced-when-i-demoed-to-the-vp-a-beautiful-dark-grey-screen-zero-data-p-51f6</link>
      <guid>https://dev.to/liora_22/the-bug-only-surfaced-when-i-demoed-to-the-vp-a-beautiful-dark-grey-screen-zero-data-p-51f6</guid>
      <description>&lt;p&gt;The bug only surfaced when I demoed to the VP. A beautiful dark-grey screen, zero data, polite applause. Rollback, coffee, open the trace. One line said &lt;code&gt;userId: 1&lt;/code&gt;. Production ID was &lt;code&gt;1&lt;/code&gt; on my machine; staging hashed it into &lt;code&gt;a1b2c3&lt;/code&gt;. The component was waiting for digits, got letters, panicked, rendered null. &lt;/p&gt;

&lt;p&gt;I hard-coded the constant instead of reading the env var. Classic. The fix took thirty seconds; the humility lasts longer.&lt;/p&gt;

&lt;p&gt;Next time the demo gods ask for a sacrifice, I’ll hand them a config file - not my dignity.&lt;/p&gt;

</description>
      <category>codequality</category>
      <category>devjournal</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>I spent three weeks chasing ghosts in our crash logs. Every Tuesday at 2:15 PM, our app started hemorrhaging users</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Tue, 17 Mar 2026 16:47:11 +0000</pubDate>
      <link>https://dev.to/liora_22/i-spent-three-weeks-chasing-ghosts-in-our-crash-logs-every-tuesday-at-215-pm-our-app-started-2ck5</link>
      <guid>https://dev.to/liora_22/i-spent-three-weeks-chasing-ghosts-in-our-crash-logs-every-tuesday-at-215-pm-our-app-started-2ck5</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t5x9k200go0vd4gu23b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t5x9k200go0vd4gu23b.jpg" alt=" " width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I spent three weeks chasing ghosts in our crash logs. Every Tuesday at 2:15 PM, our app started hemorrhaging users. Not crashing - just... vanishing. Force-closes disguised as stability issues.&lt;/p&gt;

&lt;p&gt;The stack traces showed nothing unusual. Network calls completing normally. Database queries returning expected results. No memory pressure, no ANRs, no exceptions. Just thousands of users deciding our app wasn't worth waiting for.&lt;/p&gt;

&lt;p&gt;Here's what actually happens: Tuesday afternoon meetings create a psychological Bermuda Triangle. Users return to their phones mentally exhausted, context-switching like caffeinated squirrels. Their patience threshold drops to approximately 400 milliseconds. That's the exact moment our API calls started feeling "slow."&lt;/p&gt;

&lt;p&gt;Monday's "brief loading" becomes Tuesday's "this stupid thing is broken forever." Same load times. Different humans.&lt;/p&gt;

&lt;p&gt;The solution was embarrassingly simple. We now detect when someone's rage-quitting during loading states and immediately show cached content with a subtle "syncing..." indicator. Response complaints dropped 73% overnight.&lt;/p&gt;

&lt;p&gt;The real insight isn't about performance metrics. It's about understanding that your users aren't consistent test subjects - they're humans having inconsistent days. Monitor when people are most likely to abandon, not just when they technically can abandon.&lt;/p&gt;

&lt;p&gt;Sometimes the bug isn't in your code. It's in the space between your code and someone's very bad Tuesday.&lt;/p&gt;

</description>
      <category>beginners</category>
    </item>
    <item>
      <title>Myth: You need to understand every line of code in your codebase to be a "real" developer.</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Tue, 17 Mar 2026 13:22:44 +0000</pubDate>
      <link>https://dev.to/liora_22/myth-you-need-to-understand-every-line-of-code-in-your-codebase-to-be-a-real-developer-7ag</link>
      <guid>https://dev.to/liora_22/myth-you-need-to-understand-every-line-of-code-in-your-codebase-to-be-a-real-developer-7ag</guid>
      <description>&lt;p&gt;&lt;strong&gt;Myth:&lt;/strong&gt; You need to understand every line of code in your codebase to be a "real" developer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reality:&lt;/strong&gt; Nobody understands the whole thing. Not even the people who wrote it. We're all just really good at pretending.&lt;/p&gt;

&lt;p&gt;Last Tuesday, I watched a senior developer spend three hours explaining a system he built six months ago. By hour two, he was taking notes. His own notes. On his own code. At one point he said "oh, that's clever" and then immediately looked terrified because he didn't remember being that clever.&lt;/p&gt;

&lt;p&gt;The codebase I'm working on has 847,000 lines. I've read maybe 12,000 of them. The rest is held together by what I call "architectural faith" and what my coworker calls "please don't break, please don't break, please don't break."&lt;/p&gt;

&lt;p&gt;Here's what actually works: pick a 500-line radius around whatever you're changing. Understand that. Change it. Run the tests. If the tests pass, ship it. If they fail, blame the person who wrote the tests. (It's probably you from last year. That person was an idiot.)&lt;/p&gt;

&lt;p&gt;The senior developers who seem to know everything? They're just better at pattern recognition. They've seen the same bug in eight different forms, so when version nine shows up, they recognize the shape. It's not omniscience - it's scar tissue from previous code battles.&lt;/p&gt;

&lt;p&gt;I keep a file called "spooky_action_at_a_distance.txt" where I document things that break for reasons that make no sense. Like how changing the font size on a button once broke the login form. Or how adding a comment caused a memory leak. (The comment was "this should not cause a memory leak.")&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>career</category>
      <category>discuss</category>
      <category>programming</category>
    </item>
    <item>
      <title>The 15-minute code quality check that prevents 3-hour debugging sessions</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Sun, 15 Mar 2026 11:25:41 +0000</pubDate>
      <link>https://dev.to/liora_22/the-15-minute-code-quality-check-that-prevents-3-hour-debugging-sessions-jib</link>
      <guid>https://dev.to/liora_22/the-15-minute-code-quality-check-that-prevents-3-hour-debugging-sessions-jib</guid>
      <description>&lt;p&gt;The 15-minute code quality check that prevents 3-hour debugging sessions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Variable shadowing detector&lt;/li&gt;
&lt;li&gt;Null reference catcher
&lt;/li&gt;
&lt;li&gt;Infinite loop finder&lt;/li&gt;
&lt;li&gt;Resource leak scanner&lt;/li&gt;
&lt;li&gt;Race condition sniffer&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My team runs these as pre-commit hooks. We spend 15 minutes per PR instead of 3 hours hunting ghosts in production.&lt;/p&gt;

&lt;p&gt;The infinite loop finder once caught a while(true) that would've cost us $12k in compute before anyone noticed. The developer blamed "muscle memory" but we all knew they'd been reading too many coding interview books.&lt;/p&gt;
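&lt;p&gt;That finder doesn't need to be clever. A crude sketch of the idea - the pattern and helper name are illustrative, not our actual hook:&lt;/p&gt;

```shell
# find_suspect_loops: flag while(true)/while(1)-style loops in the given
# files. A pre-commit hook would feed it the staged file list.
set -uo pipefail

find_suspect_loops() {
  grep -nE 'while[[:space:]]*\([[:space:]]*(true|1)[[:space:]]*\)' "$@" || true
}

demo=$(mktemp)
printf 'int main() { while(true) { spin(); } }\n' > "$demo"
find_suspect_loops "$demo"   # prints the offending line with its line number
rm -f "$demo"
```

&lt;p&gt;It has false positives and misses &lt;code&gt;for(;;)&lt;/code&gt;, but it would have caught the $12k loop.&lt;/p&gt;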

&lt;p&gt;Set it up once. Thank yourself forever. #DevLife #Programming #CodeQuality&lt;/p&gt;

</description>
      <category>codequality</category>
      <category>devops</category>
      <category>productivity</category>
      <category>testing</category>
    </item>
    <item>
      <title>Code review as rescue operation started when teams discovered the fastest way to ship was "approve everything</title>
      <dc:creator>Liora</dc:creator>
      <pubDate>Sun, 15 Mar 2026 06:26:37 +0000</pubDate>
      <link>https://dev.to/liora_22/code-review-as-rescue-operation-started-when-teams-discovered-the-fastest-way-to-ship-was-approve-54hd</link>
      <guid>https://dev.to/liora_22/code-review-as-rescue-operation-started-when-teams-discovered-the-fastest-way-to-ship-was-approve-54hd</guid>
      <description>&lt;p&gt;Code review as rescue operation started when teams discovered the fastest way to ship was "approve everything, fix in prod." The math checked out: 3 AM panic deployment beats 2 weeks of polite suggestions. &lt;/p&gt;

&lt;p&gt;Here's the twist: continuous review works better than batch review because developers haven't yet emotionally invested in their code. It's still clay, not marble. &lt;/p&gt;

&lt;p&gt;Try this: review within 30 minutes of pull request. Productivity jumps 23%. The code gets better, the author doesn't hate you, and you can still make dinner plans.&lt;/p&gt;

&lt;p&gt;#CodeReview #DevLife #Programming #SoftwareEngineering&lt;/p&gt;

</description>
      <category>codequality</category>
      <category>codereview</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
  </channel>
</rss>
