Your team ships APIs every sprint. But who's checking if they actually follow REST principles?
If you've ever inherited an API that uses POST /getUsers, returns 200 for everything, or nests resources six levels deep — you know the pain. Bad API design compounds. It slows down consumers, breaks conventions, and creates tech debt that outlives the team that wrote it.
Most teams handle API design quality in one of three ways:
1. Not at all — ship it and hope for the best
2. Manual reviews — a senior dev eyeballs the OpenAPI spec in a PR
3. Regex-based linting — tools like Spectral that pattern-match against rules
Options 1 and 2 don't scale. Option 3 sounds good until you hit its limits.
## The Problem with Regex-Based API Linting
Tools like Spectral work by running JSONPath queries and regex patterns against your OpenAPI spec. This works for surface-level checks: "does this path use lowercase letters?" or "is there a description field?"
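Spectral's own ruleset format makes those mechanics concrete: a JSONPath selects targets, then a regex checks their string form. A sketch of a typical pattern rule (the rule name, message, and regex here are illustrative, not from any shipped ruleset):

```yaml
# Illustrative Spectral-style rule: select every path key,
# then string-match it against a regex.
rules:
  paths-kebab-case:
    description: Path segments should be lowercase kebab-case.
    message: "{{property}} is not kebab-case."
    severity: warn
    given: $.paths[*]~        # JSONPath: the keys of the paths object
    then:
      function: pattern
      functionOptions:
        match: "^(/[a-z0-9-]+|/\\{[^}]+\\})+$"
```

Everything the rule can see is the literal path string — which is exactly where the trouble starts.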
But REST design quality isn't about string patterns. It's about semantics.
Consider this endpoint:
```yaml
paths:
  /status:
    get:
      summary: Get system health status
```
A regex-based linter with a "use plural nouns for collections" rule will flag /status as a violation. But /status isn't a collection — it's a singleton resource. A human reviewer would know that instantly. A regex can't.
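To make the distinction concrete, here is a sketch with both resource shapes side by side (paths are illustrative):

```yaml
# Both shapes are legitimate REST, but only semantic context
# distinguishes them (illustrative paths):
paths:
  /users:      # collection resource: the plural-noun rule applies
    get:
      summary: List users
  /status:     # singleton resource: there is exactly one,
    get:       # so a plural-noun rule should not fire here
      summary: Get system health status
```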
Here's a subtler one:
```yaml
paths:
  /users/delete:
    post:
      summary: Delete a user by ID
      requestBody:
        content:
          application/json:
            schema:
              properties:
                userId:
                  type: string
```
This is POST/GET tunneling — using POST to perform what should be a DELETE operation. A regex linter sees a valid POST endpoint and moves on. But this violates a core REST principle: HTTP methods should convey the operation's intent.
Detecting this requires reading the operation summary, understanding the intent, and comparing it against the HTTP method used. That's semantic analysis, not pattern matching.
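For contrast, a sketch of the same operation expressed so the HTTP method carries the intent (the path layout and response code here are illustrative):

```yaml
# One idiomatic fix (sketch): DELETE conveys the operation,
# and the identifier moves into the path.
paths:
  /users/{userId}:
    delete:
      summary: Delete a user by ID
      parameters:
        - name: userId
          in: path
          required: true
          schema:
            type: string
      responses:
        "204":
          description: User deleted
```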
## What Semantic Analysis Actually Means
When we say "semantic analysis," we mean the linter understands context and meaning, not just structure.
Here's what that looks like in practice:
| Scenario | Regex Linter | Semantic Linter |
|---|---|---|
| `/status` endpoint | False positive: "not plural" | Correct: recognizes singleton |
| `POST /users/delete` | Misses it entirely | Catches POST/GET tunneling |
| `/users/{id}/orders` | Can check nesting depth | Validates hierarchical relationships make sense |
| Inconsistent naming (`userId` vs `user_id`) | Flags with regex | Understands they refer to the same concept |
The difference matters at scale. If your linter generates 30% false positives, your team ignores it. A tool that understands your API's semantics produces actionable results.
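As a concrete instance of that last table row, here is a sketch of the same concept spelled two ways across endpoints (paths and parameter names are illustrative):

```yaml
# Two valid identifiers to a regex; one concept to a semantic linter.
paths:
  /users/{userId}:        # camelCase here...
    get:
      summary: Get a user
  /orders:
    get:
      summary: List orders for a user
      parameters:
        - name: user_id   # ...snake_case here, same concept
          in: query
          schema:
            type: string
```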
## 35+ Rules, 25+ Backed by Research
This isn't a "we think this is a best practice" situation. The rules come from peer-reviewed academic research on REST API design:
- Bogner et al. — empirical studies on RESTful service design
- Richardson & Ruby — REST principles applied to real-world services
- Masse — REST API design rulebook
The rules cover seven categories:
- HTTP Method Usage — GET/POST tunneling, correct status codes for operations
- Resource Naming & URI Design — plural collections, no CRUD in URIs, consistent casing
- Hierarchy & Structure — proper use of path params vs query params
- HTTP Status Codes — appropriate error codes, 401/415/406 support
- Content & Representation — caching headers, partial responses, consistent media types
- Resource Operations — pagination, ETags, Last-Modified
- Semantics & Consistency — cross-endpoint naming consistency, schema conventions
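To ground a few of these categories, here is a sketch of a collection endpoint that addresses pagination and caching (all parameter and header names are illustrative, not prescribed by the rules):

```yaml
# Sketch: a collection endpoint with pagination parameters
# and a caching header of the kind the rules look for.
paths:
  /articles:
    get:
      summary: List articles
      parameters:
        - name: page
          in: query
          schema: { type: integer }
        - name: pageSize
          in: query
          schema: { type: integer }
      responses:
        "200":
          description: A page of articles
          headers:
            ETag:
              schema: { type: string }
```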
## Automating This in Your Pipeline
The real value of automated API design checks isn't catching one-off mistakes — it's enforcing consistency across teams and projects over time.
Here's what a practical setup looks like:
### In CI/CD (GitHub Actions)
```yaml
- name: Evaluate API Design
  uses: Edthing/restlens-action@v1
  with:
    spec-path: "openapi.yaml"
    api-token: ${{ secrets.RESTLENS_API_TOKEN }}
```
This evaluates your spec on every PR and posts inline comments on violations. The PR fails if there are error-severity violations — just like a failing test.
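For context, that step drops into a minimal workflow like this sketch (the workflow name and trigger are illustrative; the action and its inputs are taken from the snippet above):

```yaml
# Illustrative complete workflow wrapping the evaluation step.
name: API Design Check
on:
  pull_request:
    paths:
      - "openapi.yaml"
jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Evaluate API Design
        uses: Edthing/restlens-action@v1
        with:
          spec-path: "openapi.yaml"
          api-token: ${{ secrets.RESTLENS_API_TOKEN }}
```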
### In Your Editor (VS Code)
The VS Code extension evaluates on save by default, showing violations inline and color-coded by severity before you even commit. Enable evaluate-on-type for real-time feedback as you edit.
### From the CLI
```bash
restlens eval openapi.yaml -p my-org/my-api
```
Useful for local checks or scripting into custom workflows.
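For example, a release script could gate on the evaluation result. This sketch assumes the CLI exits nonzero when error-severity violations are found, mirroring the CI behavior described above:

```bash
#!/usr/bin/env bash
# Sketch: abort a release if the spec fails design checks.
# Assumes a nonzero exit code on error-severity violations.
set -euo pipefail

if restlens eval openapi.yaml -p my-org/my-api; then
  echo "Spec passed design checks"
else
  echo "Design violations found, aborting release" >&2
  exit 1
fi
```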
### With AI Assistants (MCP)
If you use Claude or other MCP-compatible assistants, REST Lens integrates directly. Your AI assistant can evaluate specs, check violations, and help fix issues in your workflow.
## The Cost Argument
Enterprise API platforms (Postman, Stoplight, SwaggerHub) bundle design quality checks with full design suites, documentation hosting, and testing. That's great if you need all of it. But if you just want API design governance, you're paying for a lot you don't use.
Some ballpark annual costs for API governance:
| Tool | Annual Cost |
|---|---|
| Postman | $588 - $1,188 |
| Stoplight | $948 - $2,988 |
| SwaggerHub | $900 - $3,600 |
| REST Lens | ~€24 - €60 |
REST Lens is focused: it does API design quality and does it well. You're not paying for a visual API designer you don't need.
## Getting Started
If your team works with OpenAPI specs and cares about REST design quality, here's a reasonable path:
1. Start with the free tier — 3 evaluations/day, 1 project. Enough to see whether the rules catch real issues in your API.
2. Add it to one CI pipeline — pick your most active API project and add the GitHub Action.
3. Roll out to the team — use organization-level governance to enforce rules across all projects.
If you're currently using Spectral, you can import your existing rulesets and get semantic analysis on top.
REST Lens is an API design quality platform that evaluates OpenAPI specifications using semantic analysis. It's built on 35+ rules backed by peer-reviewed research, and runs in your editor, CLI, CI/CD pipeline, or AI assistant.