A TypeScript API Starter With Sharp Boundaries — Hono + Zod + better-sqlite3
Every TypeScript backend starts with the same five or six decisions. This is one opinionated pick for each, shipped as a working bookmarks CRUD you can copy as the skeleton of your next small service. Strict types, no ORM, no framework sprawl — a starter that subtracts instead of adds.
📦 GitHub: https://github.com/sen-ltd/ts-api-starter
The problem with "TypeScript API starters"
Search for "typescript api starter" and you get two flavors, both wrong.
The first is too big. A turborepo monorepo, pnpm workspaces, three shared packages, eslint + prettier + husky + commitlint, drizzle with a migration CLI, pino + OpenTelemetry, a Dockerfile that builds five images, a GitHub Actions pipeline that deploys to Kubernetes. You wanted to write a bookmarks API, and now you're debugging your tsconfig.base.json extension chain at 1 AM.
The second is too sparse. A single index.ts with one route that returns "hello". Technically a starter, but it has nothing to show you. How do you validate input? Where do migrations live? How do you test a route handler? How does an error turn into an HTTP response? Every one of those decisions is still in front of you, and the starter didn't help.
The sweet spot is a working CRUD with sharp boundaries. Small enough that you can read it in one sitting. Complete enough that every decision has been made and you can see the consequences in the code. Opinionated enough that you can copy it as-is and start writing your own domain on top.
That's what this repo is.
The stack, and why
- Hono — web framework
- Zod — request validation
- better-sqlite3 — embedded database
- vitest — tests
No Express, no Fastify, no Prisma, no Drizzle, no Knex, no caching layer, no auth, no job queue, no config framework. Four runtime dependencies, two dev dependencies plus types, 23 tests, ~600 lines of TypeScript, ships as a 172 MB Docker image.
Let me justify each pick, because the justification is the design.
Why Hono over Express or Fastify
Express is what every Node tutorial you've ever read uses, and that's its problem. Its handler signature, (req, res, next) => {}, dates from 2010, predates async/await by years, and leaks Node's mutable-response design into every route you write. You can't return a value; you have to call res.json(...). Unhandled promise rejections don't propagate without express-async-errors or a wrapper. TypeScript types are a thin layer of fiction over a fundamentally untyped API.
Fastify fixes a lot of this and is genuinely fast, but its schema system ties you to JSON Schema + Ajv, which is awkward to share with frontends and doesn't play well with TypeScript inference. It's also not small — Fastify ships plugins, hooks, encapsulation, a logger, a serializer. Great if you want a framework; wrong if you want a starter.
Hono is different on two axes that matter for this repo.
First, the handler signature is Web Fetch API standard. Handlers take a Context and return a Response. That means tests can do this, and it just works:
const app = createApp(db);
const res = await app.request('/bookmarks', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ url: 'https://hono.dev', title: 'Hono' }),
});
expect(res.status).toBe(201);
No server process. No supertest. No port juggling. app.request() hands the request through the same middleware chain that a real HTTP request would, and you get back a real Response object. Every one of the 23 tests in this repo uses this pattern. Zero network. Zero flake. That alone sells it.
Second, Hono is tiny. The core is measured in kilobytes. It runs on Node, Bun, Deno, Cloudflare Workers, Vercel Edge, and Lambda without code changes. For a starter, that portability is free optionality — you don't need it today, but the day you want to move this API to Workers, the code doesn't change.
Why Zod as the single validation boundary
The rule is: exactly one line in each route handler turns untyped input into typed domain objects. That line is .parse().
app.post('/', async (c) => {
  const body = bookmarkCreateSchema.parse(await c.req.json());
  const bookmark = insertBookmark(db, body);
  c.header('Location', `/bookmarks/${bookmark.id}`);
  return c.json(bookmark, 201);
});
If .parse() throws, the error middleware turns it into a 400 with the Zod issue list. If it returns, body is a BookmarkCreate — a type inferred from the schema itself via z.infer. Everything downstream, all the way to the SQL query, sees strict TypeScript. There is no "wait, did we validate that field already?" question anywhere, because the boundary is literally one line and it's the first thing the handler does.
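That error-to-400 mapping is worth seeing in isolation. A minimal sketch of its shape, with hypothetical stand-ins for the repo's actual types (ValidationIssue, ValidationError, and errorToResponse are illustrative names, not the repo's API — the real middleware catches ZodError directly):

```typescript
// Hypothetical stand-in for a Zod issue: a path into the payload plus a message.
interface ValidationIssue {
  path: (string | number)[];
  message: string;
}

// Thrown at the validation boundary; carries the full issue list.
class ValidationError extends Error {
  constructor(public issues: ValidationIssue[]) {
    super('validation failed');
  }
}

// Central error-to-HTTP mapping: validation failures become a 400 with the
// issue list; everything else becomes an opaque 500 so internals never leak.
function errorToResponse(err: unknown): { status: number; body: object } {
  if (err instanceof ValidationError) {
    return { status: 400, body: { error: 'validation_failed', issues: err.issues } };
  }
  return { status: 500, body: { error: 'internal' } };
}

const res = errorToResponse(
  new ValidationError([{ path: ['url'], message: 'url must be a valid URL' }]),
);
console.log(res.status); // 400
```

The point of centralizing this is that handlers never build error responses by hand; they throw, and one function decides what the wire sees.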
The schema:
const urlSchema = z
  .string().trim().min(1, 'url is required').max(2048).url('url must be a valid URL');

const tagSchema = z
  .string().trim().min(1).max(64)
  .regex(/^[a-zA-Z0-9_\-]+$/, 'tags may only contain letters, digits, underscore, hyphen');

export const bookmarkCreateSchema = z.object({
  url: urlSchema,
  title: z.string().trim().min(1).max(500),
  tags: z.array(tagSchema).max(32).default([]),
});

export type BookmarkCreate = z.infer<typeof bookmarkCreateSchema>;
A few details worth pointing out. .trim() happens inside the parse, so route handlers never see leading/trailing whitespace. .default([]) means clients can omit tags and still get a typed string[] — no manual ?? [] anywhere else. The tag regex is a deliberate allowlist: letters, digits, underscore, hyphen. Anything else is rejected at the boundary, which means the downstream LIKE '%"${tag}"%' query is trivially safe against injection without a separate escaping step.
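To make that safety argument concrete, here is a sketch of the filter construction under the allowlist (buildTagPattern is an illustrative name, not the repo's; either way the pattern should be bound as a ? parameter, never interpolated into the SQL text):

```typescript
// Mirrors the schema's allowlist: letters, digits, underscore, hyphen.
const TAG_RE = /^[a-zA-Z0-9_-]+$/;

// Build the LIKE pattern for a tag stored inside a JSON array string,
// e.g. tags = '["framework","typescript"]'. Because the allowlist excludes
// quotes, backslashes, and %, the tag cannot break out of the '"..."'
// anchor or smuggle extra wildcards into the pattern.
// Caveat: '_' is itself a LIKE single-character wildcard, so a tag
// containing '_' matches slightly more than intended — a precision issue,
// not a safety one.
function buildTagPattern(tag: string): string {
  if (!TAG_RE.test(tag)) throw new Error(`invalid tag: ${tag}`);
  return `%"${tag}"%`;
}

// Usage (sketch): the pattern travels as a bound parameter.
//   db.prepare('SELECT * FROM bookmarks WHERE tags LIKE ?')
//     .all(buildTagPattern(tag));
console.log(buildTagPattern('framework')); // %"framework"%
```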
And crucially, the same schema file is importable from a frontend. Zod is isomorphic, so a React form can pull bookmarkCreateSchema straight from this repo, validate the form client-side, and the backend will validate exactly the same rules. One schema, two layers. This is the thing Fastify's JSON Schema approach makes hard.
Why better-sqlite3, sync API and all
The pitch for better-sqlite3 is counterintuitive: it's synchronous, and that's the feature, not the bug.
Here's the full query file for bookmarks:
export function insertBookmark(db: DB, input: BookmarkCreate): Bookmark {
  const createdAt = new Date().toISOString();
  const stmt = db.prepare(
    'INSERT INTO bookmarks (url, title, tags, created_at) VALUES (?, ?, ?, ?)',
  );
  const info = stmt.run(input.url, input.title, JSON.stringify(input.tags), createdAt);
  return {
    id: Number(info.lastInsertRowid),
    url: input.url,
    title: input.title,
    tags: input.tags,
    createdAt,
  };
}
No await. No promise chain. No try/finally to release a connection. Transactions are plain closures:
const tx = db.transaction(() => {
  db.exec(sql);
  insertApplied.run(file, new Date().toISOString());
});
tx();
The async alternative (pg + Prisma, or mysql2 + Drizzle) requires every function that touches the database to be async, which means every function that touches those functions is async, which means every route handler is async, which means the whole codebase lives in the async-function color. That's fine when you need a connection pool across multiple boxes, but for a single-box service with SQLite, it's paying the complexity tax for nothing. The sync API is what the hardware actually does; the async wrapper is a lie we tell to make the event loop happy.
The tradeoff is real: a 500-millisecond query will block the event loop for 500 ms, and every other request waits. For a bookmarks API this is a non-issue — the queries run in microseconds. For a service doing full-text search over gigabytes, you'd want something else. Match the tool to the workload.
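Back-of-envelope, with assumed numbers (the microsecond figure is illustrative, not measured from this repo):

```typescript
// If each query holds the event loop for queryMicros, a fully serial
// event loop can still complete this many queries per second.
function maxQueriesPerSecond(queryMicros: number): number {
  return Math.floor(1_000_000 / queryMicros);
}

console.log(maxQueriesPerSecond(50));      // 20000 — an indexed SQLite point query
console.log(maxQueriesPerSecond(500_000)); // 2 — the 500 ms scan from above
```

Same event loop, four orders of magnitude apart in throughput, decided entirely by the query, not the driver.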
The other tradeoff is scale: SQLite doesn't scale past a single server. True. If that becomes the bottleneck, the upgrade path is Postgres plus a driver of your choice, and because the Zod-validated boundary is so clean, the route handlers don't need to change — only the src/db/queries.ts file does. That's the point of sharp boundaries. They move the blast radius.
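One way to see that blast radius: route handlers only ever compile against the query functions' signatures. A sketch of what "only queries.ts changes" means, using an in-memory stand-in in the role the SQLite (and later Postgres) implementation would fill (BookmarkStore and MemoryStore are illustrative, not the repo's actual shape):

```typescript
interface Bookmark {
  id: number;
  url: string;
  title: string;
  tags: string[];
  createdAt: string;
}
type BookmarkCreate = Omit<Bookmark, 'id' | 'createdAt'>;

// The contract the route handlers see. Swapping SQLite for Postgres means
// writing a new implementation of this interface, and nothing else.
interface BookmarkStore {
  insert(input: BookmarkCreate): Bookmark;
  listByTag(tag: string): Bookmark[];
}

// In-memory stand-in for demonstration; the real module wraps
// better-sqlite3 prepared statements behind the same methods.
class MemoryStore implements BookmarkStore {
  private rows: Bookmark[] = [];
  private nextId = 1;

  insert(input: BookmarkCreate): Bookmark {
    const row = { id: this.nextId++, createdAt: new Date().toISOString(), ...input };
    this.rows.push(row);
    return row;
  }

  listByTag(tag: string): Bookmark[] {
    return this.rows.filter((b) => b.tags.includes(tag));
  }
}

const store = new MemoryStore();
store.insert({ url: 'https://hono.dev', title: 'Hono', tags: ['framework'] });
console.log(store.listByTag('framework').length); // 1
```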
The migrator
For a starter, the migration system should be the smallest thing that could possibly work. Here it is in full:
export function migrate(db: DB): void {
  db.exec(`
    CREATE TABLE IF NOT EXISTS migrations (
      version TEXT PRIMARY KEY,
      applied_at TEXT NOT NULL
    );
  `);

  const applied = new Set(
    db.prepare('SELECT version FROM migrations').all()
      .map((r) => (r as { version: string }).version),
  );

  const files = readdirSync(MIGRATIONS_DIR)
    .filter((f) => f.endsWith('.sql'))
    .sort();

  const insertApplied = db.prepare(
    'INSERT INTO migrations (version, applied_at) VALUES (?, ?)',
  );

  for (const file of files) {
    if (applied.has(file)) continue;
    const sql = readFileSync(join(MIGRATIONS_DIR, file), 'utf8');
    const tx = db.transaction(() => {
      db.exec(sql);
      insertApplied.run(file, new Date().toISOString());
    });
    tx();
  }
}
Under thirty lines. No DSL, no down-migrations, no config. You write SQL files in migrations/ with sortable names (001_initial.sql, 002_add_index.sql), the migrator applies the ones that haven't run yet, each inside a transaction. If you need to undo a migration, write a new one that undoes the previous. For a small service, more machinery is more bugs, and the "what if the rollback script has a bug" problem is worse than the "I wrote a new migration that fixes the previous one" problem.
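Concretely, if 002_add_index.sql added an index that turned out to hurt write throughput, the fix is a third file, not a rollback (filename and index name here are illustrative, not from the repo):

```sql
-- migrations/003_drop_bookmarks_tag_index.sql
-- Undoes 002 by moving forward: the migrator applies this like any other file,
-- and the migrations table records that both 002 and 003 ran.
DROP INDEX IF EXISTS idx_bookmarks_tags;
```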
What's deliberately excluded
The instinct when writing a starter is to add things. Auth, because everyone needs auth eventually. Redis, because someday you'll cache. A job queue, because obviously you'll want background work. A logger framework with levels and transports, because printf is unprofessional.
This starter ships none of that, on purpose.
Every extra dependency is a decision you're making for the next project before you know what the project is. If you add Redis to the starter, every project that starts from it has Redis running in its Dockerfile, whether it needs caching or not. If you add passport.js, every project inherits a specific auth model, even the internal tools that should just use a header check. Starters should subtract, not add. When you need caching, add it in its own commit, with its own justification, and the existing code is so small it won't fight you.
The JSON request logger is 20 lines of plain console.log(JSON.stringify(...)). Not because pino is bad — pino is great — but because a starter's logger should be "print a line to stdout" and the platform (systemd, Docker, Cloud Run) picks it up unchanged. When you need levels, sampling, or structured context beyond the five fields in the log line, swap in pino as its own commit.
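A sketch of that logger's shape (field names here are illustrative; the repo's actual log line defines its own five fields):

```typescript
// One JSON object per request, one line per object: machine-parseable,
// and stdout is the only transport.
interface RequestLog {
  time: string;
  method: string;
  path: string;
  status: number;
  durationMs: number;
}

// Build the line and print it; returning it keeps the function testable.
function logRequest(method: string, path: string, status: number, startedAt: number): string {
  const line = JSON.stringify({
    time: new Date().toISOString(),
    method,
    path,
    status,
    durationMs: Date.now() - startedAt,
  } satisfies RequestLog);
  console.log(line);
  return line;
}
```

Because each line is a complete JSON object, `docker logs ... | jq .` already gives you structured log search with zero logging dependencies.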
Try it in 30 seconds
git clone https://github.com/sen-ltd/ts-api-starter
cd ts-api-starter
docker build -t ts-api-starter .
docker run --rm -p 8000:8000 ts-api-starter
curl -sS http://localhost:8000/health
# {"status":"ok","version":"0.1.0","db_ok":true}
curl -sS -X POST http://localhost:8000/bookmarks \
-H "Content-Type: application/json" \
-d '{"url":"https://hono.dev","title":"Hono","tags":["framework","typescript"]}'
curl -sS 'http://localhost:8000/bookmarks?tag=framework' | jq .
Or, the moment you want to build your own service:
git clone https://github.com/sen-ltd/ts-api-starter my-service
cd my-service
rm -rf .git && git init
# Rename bookmarks -> your domain, keep the boundaries.
Tradeoffs to go in with, eyes open
- Sync DB calls block the event loop. Fine for small queries on small services. Not fine for long full-text scans. Know your workload.
- SQLite is single-box. Horizontal scale requires a real database. The upgrade is mechanical because the queries are hand-written and isolated.
- No ORM. SQL is explicit, which is longer but obvious. If your team hates writing SQL, this is the wrong starter.
- No auth. Add it in its own commit. This starter is for the part of the stack that sits behind an auth layer, not the auth layer itself.
- Hand-written queries mean hand-written tag filtering. tags LIKE '%"portfolio"%' works for this starter's scale but is not a real tag index. Promote tags to their own table when that becomes the bottleneck.
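When that day comes, the promotion is itself just a new migration, roughly like this (table and column names are illustrative, not from the repo):

```sql
-- A sketch of the normalized form: one row per tag, one row per
-- bookmark/tag pair, and an index that turns tag filtering into a seek.
CREATE TABLE tags (
  id INTEGER PRIMARY KEY,
  name TEXT NOT NULL UNIQUE
);

CREATE TABLE bookmark_tags (
  bookmark_id INTEGER NOT NULL REFERENCES bookmarks(id),
  tag_id INTEGER NOT NULL REFERENCES tags(id),
  PRIMARY KEY (bookmark_id, tag_id)
);

CREATE INDEX idx_bookmark_tags_tag ON bookmark_tags(tag_id);
```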
Closing
Entry #136 in a 100+ entry portfolio series by SEN LLC. This is the first TypeScript backend entry in the series, establishing the house style for the TS API entries that follow. The next few will reuse this same validation/routing pattern on different domains to show how the boundaries hold.
Feedback welcome.
