charley-simon

Posted on
Zero application code for a REST API — semantic navigation with LinkLab


The starting point

At the beginning of this project, I didn't know which storage backend I was going to keep.

JSON first, because that's what TMDB returns and it's the native format of all the REST APIs I was consuming. Then MongoDB, to experiment — storing JSON is natural with a NoSQL database and I wanted that experience. Then Postgres, out of old habit, and because normalizing the data seemed like the right direction.

Every time I switched, I rewrote. Queries, routes, joins, links between resources. Not dramatic on a small project — but irritating enough to make me ask: should this really be my job?

That frustration is where LinkLab was born.


What it is not

Before going further, let me be clear about something.

If you read "abstraction over data", you probably thought ORM. Hibernate, Sequelize, ActiveRecord. I understand the reflex — and I share it. ORMs have cost me more time than they've saved. Lazy loading that blows up in production, N+1 discovered too late, generated SQL you can no longer control.

LinkLab is not an ORM. It does not map tables to objects. It does not manage migrations. It does not hide your SQL.

What it does: compile a navigation graph from your existing schema and resolve paths through it. The generated SQL is readable — you can see it live in the REPL. Nothing is hidden.

An ORM hides your data behind objects. LinkLab exposes the relations between your entities as navigable paths. That's a fundamental difference.


It's not all or nothing

LinkLab does not require you to rewrite your project.

You can point it at a single entity in your existing project, see what it finds on your real data, and decide from there. Nothing in your current code changes. If it brings value, you extend. If it doesn't fit your case, you've lost nothing.

That's the best way to evaluate it: with your own data, not with examples.


The demo — PostgreSQL

Let's take a concrete database: dvdrental, the PostgreSQL demo database many devs know. 15 tables, classic relations, nothing artificial.

Three commands:

linklab init dvdrental
# edit dvdrental.linklab.ts — set your connection details
linklab build dvdrental
linklab repl dvdrental

The build analyzes the schema, infers relations, and compiles a graph of 210 routes. No manual join configuration. No mapping to write.

linklab v0.1.0  ·  graph

① Extract      ████████████  15 tables                     1229ms
② Analyze      ████████████  1 pivot · 3 warnings             5ms
③ Dictionary   ████████████  36 relations                      3ms
④ Assemble     ████████████  15 nodes · 36 edges               3ms
⑤ Train        ████████████  12 routes trained                 4ms
⑥ Compile      ████████████  210 routes                       36ms
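How can a build step infer relations without configuration? One common heuristic is naming conventions plus foreign-key shape. The sketch below is my illustration of that idea, not LinkLab's actual algorithm — the interfaces and the `looksLikePivot` rule are assumptions:

```typescript
// Illustrative sketch: detect a pivot (join) table from its shape.
// NOT LinkLab's implementation — just the kind of inference a schema
// build step can do without any manual mapping.

interface Column { name: string; isForeignKey: boolean }
interface Table { name: string; columns: Column[] }

// Heuristic: a table is a pivot when it has exactly two foreign keys
// and no payload columns beyond keys and bookkeeping timestamps.
function looksLikePivot(table: Table): boolean {
  const fks = table.columns.filter(c => c.isForeignKey)
  const payload = table.columns.filter(
    c => !c.isForeignKey && !/^(id|.*_id|created_at|last_update)$/.test(c.name)
  )
  return fks.length === 2 && payload.length === 0
}

const filmActor: Table = {
  name: 'film_actor',
  columns: [
    { name: 'film_id', isForeignKey: true },
    { name: 'actor_id', isForeignKey: true },
    { name: 'last_update', isForeignKey: false },
  ],
}

const film: Table = {
  name: 'film',
  columns: [
    { name: 'film_id', isForeignKey: false },
    { name: 'title', isForeignKey: false },
    { name: 'language_id', isForeignKey: true },
  ],
}

console.log(looksLikePivot(filmActor)) // film_actor: two FKs, no payload
console.log(looksLikePivot(film))      // film: has payload columns
```

In dvdrental this rule alone would flag film_actor and film_category — which matches the "1 pivot · 36 relations" kind of output the build step reports.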

In the REPL:

 dvdrental.film('Academy Dinosaur').actor

What LinkLab generates automatically:

WITH
  step0 AS (
    SELECT DISTINCT film.* FROM film
    WHERE film.title ILIKE 'Academy Dinosaur'
  ),
  step1 AS (
    SELECT DISTINCT actor.*
    FROM actor
    INNER JOIN film_actor ON film_actor.actor_id = actor.actor_id
    INNER JOIN step0 ON step0.film_id = film_actor.film_id
  )
SELECT * FROM step1

10 actors, 93ms. The pivot table film_actor was inferred automatically — you never mentioned it.
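The shape of that query — one CTE per navigation step, each joining back to the previous one — can be sketched as a tiny renderer. This is my illustration of the pattern, not LinkLab's compiler; the `Hop` type and key names are assumptions:

```typescript
// Illustrative sketch: render a navigation path as a CTE chain,
// step0 .. stepN, each step joining the previous via a pivot table.
// NOT LinkLab internals.

interface Hop {
  table: string  // table selected at this step
  joinVia?: { pivot: string; fromKey: string; toKey: string }  // pivot join (absent for step 0)
}

function renderTrail(hops: Hop[], where: string): string {
  const ctes = hops.map((hop, i) => {
    if (i === 0) {
      return `step0 AS (SELECT DISTINCT ${hop.table}.* FROM ${hop.table} WHERE ${where})`
    }
    const prev = `step${i - 1}`
    const { pivot, fromKey, toKey } = hop.joinVia!
    return [
      `step${i} AS (SELECT DISTINCT ${hop.table}.* FROM ${hop.table}`,
      `  INNER JOIN ${pivot} ON ${pivot}.${toKey} = ${hop.table}.${toKey}`,
      `  INNER JOIN ${prev} ON ${prev}.${fromKey} = ${pivot}.${fromKey})`,
    ].join('\n')
  })
  return `WITH\n${ctes.join(',\n')}\nSELECT * FROM step${hops.length - 1}`
}

const sql = renderTrail(
  [
    { table: 'film' },
    { table: 'actor', joinVia: { pivot: 'film_actor', fromKey: 'film_id', toKey: 'actor_id' } },
  ],
  `film.title ILIKE 'Academy Dinosaur'`
)
console.log(sql)
```

Each additional hop in the path just appends one more CTE, which is why chained navigation composes so cheaply.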

The result is a plain JavaScript array. In the REPL, you can call .map() directly on a Trail result:

 dvdrental.film('Academy Dinosaur').actor.map(a => a.last_name)
// ['Guiness', 'Gable', 'Tracy'...]

In application code, await first and chain on the plain array:

const actors = await dvdrental.film('Academy Dinosaur').actor
const names = actors.map(a => a.last_name)
// ['Guiness', 'Gable', 'Tracy'...]

Chaining .filter().map() directly on a Trail result is on the roadmap.

And chained navigation:

 dvdrental.film('Academy Dinosaur').actor.film
   film → film_actor → actor → film_actor → film
  244 results  98ms

244 films. Two joins via film_actor, traversal in both directions. One line.

Tab completion

What makes the difference in a live demo: the Tab key.

 dvdrental.film('Academy Dinosaur').
   actor  category  language  inventory  rental  payment  store  ...

The REPL doesn't show you all entities — it shows you the ones reachable from this context. That's the graph speaking in real time.


The same thing on JSON — and semantic views

Let's switch source. I built a small test project around the TMDB API — films, people, credits stored as JSON files. Not a production app, just a playground to explore what LinkLab could do with a different data source.

Same commands:

linklab build cinema
linklab repl cinema

 cinema.movies(278).people
  13 results  14ms

Tim Robbins, Morgan Freeman, Frank Darabont. Same navigation, same syntax — completely different source.

But what's interesting here is something else.

In this dataset, movies and people are linked by a credits table. A credit is a person, a film, and a role: actor, director, writer. All of these relations point to the same entity, people — differentiated only by role.

LinkLab detects this at compile time and automatically generates semantic views:

 cinema.movies(278).
   people   actors   director   writers   ...

actors, director, writers are not distinct entities in your schema. They are paths toward people, automatically filtered by role in credits.

 cinema.movies(278).actors
   Tim Robbins, Morgan Freeman, Bob Gunton...

 cinema.movies(278).director
   Frank Darabont

What's consistent here: people('Christopher Nolan').director and directors('Christopher Nolan') resolve to the same thing — same entity, filtered by role. No separate endpoint to maintain, no duplication.

In a classic REST API, these would be separate endpoints, routes to maintain, queries to write. Here it's a graph — one path per intention.
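Conceptually, a semantic view is just the target relation pre-filtered by role. The sketch below shows that idea on in-memory data; the field names mirror this article's dataset, but the code is my illustration, not LinkLab's view mechanism:

```typescript
// Illustrative sketch: one credits relation, several semantic views
// over the same target entity. NOT LinkLab's implementation.

interface Credit { movieId: number; personId: number; role: 'actor' | 'director' | 'writer' }
interface Person { id: number; name: string }

const people: Person[] = [
  { id: 1, name: 'Tim Robbins' },
  { id: 2, name: 'Frank Darabont' },
]
const credits: Credit[] = [
  { movieId: 278, personId: 1, role: 'actor' },
  { movieId: 278, personId: 2, role: 'director' },
  { movieId: 278, personId: 2, role: 'writer' },
]

// A "semantic view" = the people relation, filtered by role in credits.
function view(movieId: number, role: Credit['role']): Person[] {
  const ids = credits
    .filter(c => c.movieId === movieId && c.role === role)
    .map(c => c.personId)
  return people.filter(p => ids.includes(p.id))
}

console.log(view(278, 'actor').map(p => p.name))    // ['Tim Robbins']
console.log(view(278, 'director').map(p => p.name)) // ['Frank Darabont']
```

Because both `actors` and `director` resolve through the same relation, there is a single source of truth: change the credits data and every view agrees.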

In application code:

const films = await cinema.directors('Christopher Nolan').movies
const titles = films
  .filter(f => f.release_year > 2000)
  .map(f => f.title)
// ['Interstellar', 'Inception', 'The Dark Knight'...]

A REST API in one command

The REPL is great for exploration. To expose the same graph as an HTTP API:

linklab server cinema --expose-all

LinkLab Server  ·  json:data
1532 compiled routes  ·  7 entities
URL  http://localhost:3000/api

Immediately:

curl http://localhost:3000/api/movies/278/people

13 people. With their _links:

{
  "id": 504,
  "name": "Tim Robbins",
  "_links": {
    "self":    { "href": "/api/movies/278/people/504" },
    "up":      { "href": "/api/movies/278" },
    "movies":  { "href": "/api/movies/278/people/504/movies" },
    "credits": { "href": "/api/movies/278/people/504/credits" }
  }
}

Links are generated from the graph. Not configured — inferred. The client can navigate without knowing the API topology in advance. That's HATEOAS, Level 3 of the Richardson Maturity Model.
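Generating those links from adjacency is mechanical: `self` and `up` come from the current path, plus one link per reachable neighbor. A sketch under assumed shapes (not LinkLab's actual serializer):

```typescript
// Illustrative sketch: derive a HAL-style _links object from the
// resource path and its graph neighbors. NOT LinkLab's output code.

function buildLinks(basePath: string, id: number, neighbors: string[]) {
  const self = `${basePath}/${id}`
  const links: Record<string, { href: string }> = {
    self: { href: self },
    // "up" strips the last path segment: /movies/278/people -> /movies/278
    up: { href: basePath.replace(/\/[^/]+$/, '') },
  }
  for (const n of neighbors) links[n] = { href: `${self}/${n}` }
  return links
}

const links = buildLinks('/api/movies/278/people', 504, ['movies', 'credits'])
console.log(links.self.href)   // "/api/movies/278/people/504"
console.log(links.up.href)     // "/api/movies/278"
console.log(links.movies.href) // "/api/movies/278/people/504/movies"
```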

Zero lines of application code. Zero routes written by hand.


For production

The linklab server command is for exploration and demos. For production, you plug linklabPlugin directly into your own Fastify server, with your auth middleware, rate limiting, error handling:

import Fastify from 'fastify'
import { linklabPlugin } from '@linklab/core'

const app = Fastify()

await app.register(linklabPlugin, {
  graph: compiledGraph,
  prefix: '/api',
  dataLoader: { provider: postgresProvider },
  onEngine: (engine, req) => {
    engine.hooks.on('access.check', async (ctx) => {
      if (!req.user) return { cancelled: true, reason: 'unauthenticated' }
    })
  }
})

What goes to production is the compiled graph and the plugin — not the CLI. The CLI is your development and exploration tool.

What about sensitive data?

By default, expose is set to 'none' — nothing in your database is accessible over HTTP without an explicit declaration. For this demo, we used --expose-all. In a real project, you list exactly what you want to expose:

export default defineConfig({
  alias: 'myproject',
  source: { ... },
  expose: { include: ['film', 'actor', 'category'] }
})

Sensitive entities — users, payments, staff — stay invisible. expose controls the surface, your access.check hooks control per-user rights. Two separate concerns, both explicit.
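The allow-list semantics are simple to reason about. A minimal sketch of the filtering logic — the config shape mirrors the article, the function itself is my assumption:

```typescript
// Illustrative sketch of the expose allow-list: nothing is reachable
// over HTTP unless explicitly listed. NOT LinkLab's implementation.

type ExposeConfig = 'none' | 'all' | { include: string[] }

function isExposed(entity: string, expose: ExposeConfig): boolean {
  if (expose === 'none') return false   // the safe default
  if (expose === 'all') return true     // demo mode (--expose-all)
  return expose.include.includes(entity)
}

const config: ExposeConfig = { include: ['film', 'actor', 'category'] }
console.log(isExposed('film', config))    // listed: reachable
console.log(isExposed('payment', config)) // sensitive table stays invisible
```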


There's a lot more to say

This article shows the surface. What it doesn't show:

  • SQL path optimization via Dijkstra's algorithm
  • The weight system that learns from real usage and automatically recalibrates routes
  • Real-time Trail observability via OpenTelemetry
  • The view and action framework that sits on top of the graph
  • Declarative filters in the Trail — movies.where({ release_year: { gt: 2000 } }) — under development, with a hook for cases the DSL doesn't cover natively

Those will be the subjects of future articles.
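As a teaser for the first item: the core routing idea is classic shortest-path search over the entity graph. A minimal Dijkstra sketch with made-up uniform weights — the real system's weights and recalibration are not shown here:

```typescript
// Minimal Dijkstra over a weighted entity graph: the cheapest path
// between two entities is the route a compiler would pick. Weights
// are invented for the example; NOT LinkLab's engine.

type Edges = Record<string, Record<string, number>>

function shortestPath(edges: Edges, start: string, goal: string): string[] {
  const dist: Record<string, number> = { [start]: 0 }
  const prev: Record<string, string> = {}
  const queue = new Set(Object.keys(edges))
  while (queue.size) {
    // Pick the unvisited node with the smallest known distance.
    let u: string | undefined
    for (const n of queue) {
      if (u === undefined || (dist[n] ?? Infinity) < (dist[u] ?? Infinity)) u = n
    }
    if (u === undefined || u === goal) break
    queue.delete(u)
    // Relax each outgoing edge.
    for (const [v, w] of Object.entries(edges[u] ?? {})) {
      const alt = (dist[u] ?? Infinity) + w
      if (alt < (dist[v] ?? Infinity)) { dist[v] = alt; prev[v] = u }
    }
  }
  // Walk predecessors back from the goal.
  const path = [goal]
  while (path[0] !== start) path.unshift(prev[path[0]])
  return path
}

const edges: Edges = {
  film: { film_actor: 1, inventory: 1 },
  film_actor: { actor: 1, film: 1 },
  inventory: { rental: 1, film: 1 },
  actor: { film_actor: 1 },
  rental: { inventory: 1 },
}
console.log(shortestPath(edges, 'film', 'actor')) // film -> film_actor -> actor
```

With learned weights instead of uniform ones, the same search naturally starts preferring routes that have performed well on real data.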


What LinkLab doesn't handle yet

A few cases that may cause issues today:

  • Highly atypical schemas or schemas without clear naming conventions
  • MongoDB — no driver yet, it's on the roadmap
  • Databases with hundreds of tables — the build works but the graph becomes complex to explore
  • Fine-grained per-resource authentication — possible via hooks but requires code

If you hit a case that doesn't work, that's a GitHub issue. Not a disappointment — a contribution.


Try it with your own data

That's where it gets interesting. Not with dvdrental — with your own database.

npm install -g @linklabjs/cli
linklab init myproject
# edit myproject.linklab.ts — set your connection details
linklab build myproject
linklab repl myproject

You don't need to rewrite your project. Point LinkLab at an existing schema, explore what it finds, and make up your own mind.

The repo is on GitHub: https://github.com/charley-simon/linklab

Would you use something like this on a real project? And where do you think it would break?
