Olivia Craft

Cursor Rules for Node.js: The Complete Guide to AI-Assisted Node.js Development

Node.js is the runtime where "it works locally" hides the longest lie. The process starts, the HTTP server listens, and nothing in npm start tells you that the unhandled promise rejection inside a setImmediate callback is silently swallowed under Node 14 and crashes the whole process under Node 20, that the route handler reading req.body.email is one malformed payload away from a 500 that leaks the stack trace, that the 800MB JSON file the background job reads with fs.readFileSync works fine for a 50-row test fixture and OOM-kills the container on a real upload, or that the process.on('SIGTERM') handler you thought drained the queue is shadowed by an earlier listener that calls process.exit(0) before any of them run. The service ships. At 3am an alert pages you because requests hang and the connection pool is exhausted — a Redis client that was never closed on shutdown.

Then you add an AI assistant.

Cursor and Claude Code were trained on a decade of Node.js content. Most of it is pre-async/await callback soup, express examples that res.send(err) the error message straight back to the client, fs.readFileSync anywhere convenient, console.log as the logger, dotenv.config() with no validation and a crash seven levels deep when process.env.DATABASE_URL is undefined, and require('./utils/db') circular imports that return {} at startup and only blow up when the first request hits. Ask for "a simple API that fetches users," and you get an app.get with untyped req.query, a DB call inside a try/catch that swallows the error, and a 200 response on failure because the handler forgot to return.

The fix is .cursorrules — one file in the repo that tells the AI what idiomatic modern Node.js looks like. Eight rules below, each with the failure mode, the rule, and a before/after. The complete copy-paste .cursorrules file is at the end.

How Cursor Rules Work for Node.js Projects

Cursor reads project rules from two locations: .cursorrules (a single file at the repo root, still supported) and .cursor/rules/*.mdc (modular files with frontmatter, recommended for anything bigger than a single service). For Node.js I recommend modular rules so a worker service's streaming conventions don't bleed into an HTTP API's request-handler constraints:

.cursor/rules/
  node-core.mdc          # async/await, error handling, modules
  node-config.mdc        # env validation, startup invariants
  node-http.mdc          # route handlers, Zod, error middleware
  node-streams.mdc       # stream-based I/O, backpressure
  node-ops.mdc           # graceful shutdown, logging, DI

Frontmatter controls activation: globs: ["**/*.{ts,js,mjs}"] with alwaysApply: false. Now the rules.
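A minimal `.mdc` file might look like this (a sketch — `description`, `globs`, and `alwaysApply` are Cursor's rule frontmatter keys; the glob and rule text here are examples):

```markdown
---
description: Node.js core conventions — async, errors, modules
globs: ["src/**/*.{ts,js,mjs}"]
alwaysApply: false
---

- Every async call is awaited or its promise is returned.
- All thrown errors extend the project AppError class.
```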

Rule 1: Async Error Handling — Propagate, Never Swallow

The most common AI failure in Node.js is the try/catch that "handles" an error by logging it and continuing as if nothing happened. Cursor generates try { const user = await db.getUser(id); } catch (e) { console.log(e); } — the route returns 200 OK with undefined in the body, the caller assumes success, and the bug surfaces three services downstream. The second most common failure is the un-awaited promise: db.write(record) with no await, fired off into the microtask queue, and any rejection becomes an unhandledRejection that in Node 15+ terminates the process.

The rule:

Every async function either awaits every promise it creates or returns
the promise for the caller to await. Never fire-and-forget without
`.catch()` — bare `somePromise();` is a lint error.

try/catch is for transforming errors (wrapping with context) or for
cleanup (finally). It is NEVER a place to log-and-swallow. If you catch,
you either re-throw (often wrapped) or handle it as a first-class
result — not both.

All errors extend a project `AppError` class with a `code`, `statusCode`,
and `cause`. Construct with `new NotFoundError('user', { cause: err })`.
Never throw plain strings. Never throw plain `Error` in library code.

Route handlers use `asyncHandler(fn)` or framework-native async support
(Fastify, Hono, Koa). Never hand-rolled `.then().catch(next)`. The
central error middleware is the single place that maps AppError ->
HTTP response; handlers never call `res.status(500)` directly.

`process.on('unhandledRejection')` logs + exits.
`process.on('uncaughtException')` logs + exits. Never swallow at the
process level.

Before — swallowed error, fire-and-forget write, leaked stack trace:

app.get('/users/:id', async (req, res) => {
  try {
    const user = await db.getUser(req.params.id);
    db.recordAccess(req.params.id); // unawaited — rejection will crash process
    res.json(user);
  } catch (e) {
    console.log('error:', e);
    res.send(e.message); // leaks internals, status is still 200
  }
});

Three bugs: the undefined body on error, the unhandled rejection from recordAccess, and the 200 OK with an error message.

After — typed errors, awaited side effects, central mapping:

class NotFoundError extends AppError {
  constructor(resource: string, opts?: { cause?: unknown }) {
    super(`${resource} not found`, { code: 'NOT_FOUND', statusCode: 404, ...opts });
  }
}

app.get('/users/:id', asyncHandler(async (req, res) => {
  const user = await db.getUser(req.params.id);
  if (!user) throw new NotFoundError('user');
  await db.recordAccess(req.params.id);
  res.json(user);
}));

app.use((err: unknown, req: Request, res: Response, next: NextFunction) => {
  if (err instanceof AppError) {
    logger.warn({ err, reqId: req.id }, 'handled error');
    return res.status(err.statusCode).json({ code: err.code, message: err.message });
  }
  logger.error({ err, reqId: req.id }, 'unhandled error');
  res.status(500).json({ code: 'INTERNAL', message: 'Internal error' });
});

process.on('unhandledRejection', (reason) => {
  logger.fatal({ err: reason }, 'unhandledRejection');
  process.exit(1);
});

Missing user returns a proper 404 with a stable error code. recordAccess is awaited, so its rejection hits the error middleware. No stack trace ever reaches the client.
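Express 4 does not forward rejected promises to the error middleware on its own, so the `asyncHandler` used above has to be supplied. A minimal sketch, with generic types so it stands alone (with Express you would use its `Request`/`Response`/`NextFunction` types; Express 5 handles this natively):

```typescript
// Forward any rejection from an async handler to next(), so the central
// error middleware sees it instead of it becoming an unhandledRejection.
type Next = (err?: unknown) => void;

export function asyncHandler<Req, Res>(
  fn: (req: Req, res: Res, next: Next) => Promise<unknown>,
) {
  return (req: Req, res: Res, next: Next): void => {
    Promise.resolve(fn(req, res, next)).catch(next);
  };
}
```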

Rule 2: Module Boundaries — No Circular Imports, Explicit Public API

Node's module resolver happily returns a half-constructed {} when two files import each other during the initial load. The bug is invisible at startup — the export the first file wanted wasn't defined yet — and surfaces the moment the first request reads undefined.something. Cursor writes circular imports without blinking because JavaScript doesn't warn about them, and because the "put the helper in the same folder" instinct creates them faster than you can notice. The second failure mode is the barrel file (index.ts) that re-exports every internal helper, erasing the distinction between the module's public API and its innards.

The rule:

Each folder is a module. Exactly one `index.ts` per module, re-exporting
ONLY the symbols that are part of the public API. Importers outside
the module reach in through `./users` (the index), never
`./users/internal/repository`.

No circular imports. Enforce with `eslint-plugin-import` rule
`import/no-cycle`: 'error' in CI. If a cycle appears, one of the two
modules has too many responsibilities — split it or extract the shared
type into a third module.

Import aliases: `@/users`, `@/orders`. No `../../../../` paths. Configure
`tsconfig.paths` plus runtime support (tsx, tsconfig-paths, or Node's
native subpath imports via the `"imports"` field in package.json).

Types-only imports use `import type { ... }`. Prevents circular type
imports from pulling runtime code.

Every module exports a single service/factory function. No "util"
modules that accrete unrelated helpers. No module-level side effects
(`db.connect()` on import) — all init happens in an explicit `start()`.
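Enforcing the no-cycle rule in CI can look like this — an ESLint flat-config sketch, assuming `eslint-plugin-import` is installed (`maxDepth: Infinity` is the rule's default and catches indirect cycles):

```javascript
// eslint.config.js — flat config; assumes eslint-plugin-import is installed
import importPlugin from 'eslint-plugin-import';

export default [
  {
    files: ['src/**/*.{ts,js,mjs}'],
    plugins: { import: importPlugin },
    rules: {
      // Fail the build on any import cycle, including indirect ones.
      'import/no-cycle': ['error', { maxDepth: Infinity }],
    },
  },
];
```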

Before — circular dependency, side effect on import, barrel re-exports everything:

// users/repository.js
const { getOrderCount } = require('../orders/service'); // depends on orders

module.exports = {
  async getUser(id) {
    const user = await db.get(id);
    user.orderCount = await getOrderCount(user.id);
    return user;
  },
};

// orders/service.js
const { getUser } = require('../users/repository'); // cycle!

module.exports = {
  async getOrderCount(userId) {
    const u = await getUser(userId); // will be undefined on first load
    return u.orders.length;
  },
};

// users/index.js — re-exports every internal
module.exports = require('./repository');
module.exports.internalHasher = require('./internal/hasher');

First request into either service sees undefined from the other — the import returned {} before the second module finished loading.

After — one-way dependency, explicit public API, types-only at the boundary:

// users/repository.ts  (internal — not exported from index)
import type { Database } from '@/infra/db';

export class UserRepository {
  constructor(private readonly db: Database) {}
  async findById(id: string): Promise<User | null> {
    return this.db.users.findUnique({ where: { id } });
  }
}

// users/service.ts
import type { OrderCounter } from './types'; // users owns the port; orders implements it
import { UserRepository } from './repository';

export class UserService {
  constructor(
    private readonly repo: UserRepository,
    private readonly orderCounter: OrderCounter,
  ) {}

  async getUser(id: string): Promise<UserWithOrders | null> {
    const user = await this.repo.findById(id);
    if (!user) return null;
    return { ...user, orderCount: await this.orderCounter.countFor(user.id) };
  }
}

// users/types.ts — the port is defined by its consumer
export type OrderCounter = { countFor(userId: string): Promise<number> };

// users/index.ts — the public API
export { UserService } from './service';
export type { User, UserWithOrders, OrderCounter } from './types';

orders depends on users' OrderCounter type. users doesn't import orders at all — the concrete implementation is injected at composition root. No cycle. Lint catches any future one.

Rule 3: Config Validation at Startup — Fail Fast, Fail Loud

process.env.PORT is a string or undefined. Cursor writes const port = process.env.PORT || 3000, then app.listen(port) works, and you ship. Six weeks later someone sets PORT=three-thousand in the staging .env, and server.listen gets a non-numeric string — which Node treats as a pipe name, not a TCP port. The process starts cleanly and binds nothing you can route to. The liveness check passes because the process is up. Traffic never arrives. You find out when the PM asks why the feature they shipped doesn't work. Same story with database URLs (a typo routes writes to an empty dev instance), feature flags (a missing one becomes false), and secret keys (a missing one becomes undefined and signatures verify as undefined === undefined).

The rule:

Every env variable the app reads goes through a schema validator (Zod,
Envalid, Joi, or TypeBox) at startup. No `process.env.X` access in
application code — read from the validated `config` object.

The schema is the single source of truth for required vs optional,
types (number, boolean, URL, enum), and defaults. Non-conforming env
is a fatal startup error with a human-readable list of what's wrong.

Secrets come from env, not files. Refuse to start in production if any
secret < 16 chars or matches a known-weak default ('changeme', 'secret',
'password').

`NODE_ENV` is one of `'development' | 'test' | 'production'` with
strict checking — never `if (process.env.NODE_ENV === 'prod')`.

Config is frozen (`Object.freeze`) after construction. No runtime
mutation. No `config.featureX = true` in tests — swap the whole config.

Before — unvalidated env, silent defaults, typo in production:

// config.js
module.exports = {
  port: process.env.PORT || 3000,
  dbUrl: process.env.DATABASE_URL,
  jwtSecret: process.env.JWT_SECRET || 'secret',
  maxUploadMb: process.env.MAX_UPLOAD_MB || 10,
};

// server.js
const config = require('./config');
app.listen(config.port);
// config.maxUploadMb === '10' — a string. Works until someone does
// config.maxUploadMb * 1024 * 1024 and it's still a string concat.

Missing DATABASE_URL is silently undefined. JWT_SECRET defaults to 'secret' — the first attacker who reads the repo bypasses auth.

After — Zod schema, exhaustive parsing, fail-fast boot:

import { z } from 'zod';

const EnvSchema = z.object({
  NODE_ENV: z.enum(['development', 'test', 'production']),
  PORT: z.coerce.number().int().positive().default(3000),
  DATABASE_URL: z.string().url(),
  JWT_SECRET: z.string().min(32, 'JWT_SECRET must be ≥32 chars'),
  MAX_UPLOAD_MB: z.coerce.number().int().positive().max(500).default(10),
  LOG_LEVEL: z.enum(['trace', 'debug', 'info', 'warn', 'error']).default('info'),
});

export type Config = z.infer<typeof EnvSchema>;

function loadConfig(): Readonly<Config> {
  const result = EnvSchema.safeParse(process.env);
  if (!result.success) {
    console.error('Invalid environment:');
    for (const issue of result.error.issues) {
      console.error(`  ${issue.path.join('.')}: ${issue.message}`);
    }
    process.exit(1);
  }
  if (result.data.NODE_ENV === 'production' &&
      ['changeme', 'secret', 'password'].some(w => result.data.JWT_SECRET.toLowerCase().includes(w))) {
    console.error('JWT_SECRET looks weak. Refusing to start.');
    process.exit(1);
  }
  return Object.freeze(result.data);
}

export const config = loadConfig();

Missing, malformed, or weak config fails at boot with a precise error. Downstream code reads config.maxUploadMb as a number. NODE_ENV is a compile-time enum.

Rule 4: Graceful Shutdown — Drain Connections, Close Resources, Exit on Your Terms

Kubernetes sends SIGTERM and waits 30 seconds before SIGKILL. A Node process that doesn't handle SIGTERM drops in-flight requests, leaks DB connections, and corrupts whatever transaction was mid-write. Cursor's default HTTP server is app.listen(port) with no shutdown handling at all. The close-but-wrong version registers a SIGTERM handler that calls process.exit(0) immediately, which is equivalent to SIGKILL for any work still in flight. The subtly-wrong version calls server.close() but doesn't drain the HTTP/1.1 keep-alive pool — close hangs forever, the timeout kicks in, and you're back to data loss.

The rule:

`main()` registers shutdown handlers for SIGTERM and SIGINT. Ignore
SIGPIPE. Never register for SIGKILL (you can't).

Shutdown is idempotent and timeboxed:
  1. Stop accepting new work (server.close(), queue.pause()).
  2. Wait for in-flight work to finish OR the deadline (e.g. 25s for
     a k8s 30s grace window).
  3. Close resources in reverse init order (HTTP server ->
     message consumers -> DB pools -> loggers).
  4. If anything hangs, log what held the process and exit(1).

Use `http.Server.closeAllConnections()` (Node 18.2+) after close() to
kill keep-alive sockets. Otherwise close() waits indefinitely.

Every long-lived resource (DB pool, Redis, queue consumer, cron) has
a `.stop()` method returning a Promise. The composition root keeps a
list and awaits them all on shutdown.

No `process.exit(0)` inside handlers. Let the event loop drain and
exit naturally, OR exit(1) on timeout. Never mix the two.
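The "list of resources, stopped in reverse init order" that the composition root keeps can be a tiny registry. A sketch — `Stoppable` and `ShutdownRegistry` are names assumed here, not an established API:

```typescript
// Register long-lived resources as they start; stop them in reverse
// init order on shutdown. stopAll() is idempotent, per the rule above.
interface Stoppable {
  name: string;
  stop(): Promise<void>;
}

export class ShutdownRegistry {
  private resources: Stoppable[] = [];
  private stopped = false;

  register(resource: Stoppable): void {
    this.resources.push(resource);
  }

  // Reverse order: the HTTP server registered last stops first,
  // the DB pool registered first stops last.
  async stopAll(): Promise<void> {
    if (this.stopped) return;
    this.stopped = true;
    for (const resource of [...this.resources].reverse()) {
      await resource.stop();
    }
  }
}
```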

Before — abrupt exit, dropped requests, leaked pool:

const app = express();
// ...
const server = app.listen(3000);

process.on('SIGTERM', () => {
  console.log('shutting down');
  process.exit(0); // kills in-flight requests, no DB drain
});

In-flight requests return 502 at the load balancer. DB pool sockets are half-closed; next pod restart hits a stale connection limit.

After — timeboxed drain, ordered teardown, exit on deadline:

async function main() {
  const db = await createDbPool(config);
  const redis = await createRedis(config);
  const queue = await startQueueConsumer({ db });
  const server = createHttpServer({ db, redis });
  await new Promise<void>(resolve => server.listen(config.PORT, resolve));
  logger.info({ port: config.PORT }, 'listening');

  const shutdown = async (signal: NodeJS.Signals) => {
    logger.info({ signal }, 'shutdown: begin');
    const deadline = setTimeout(() => {
      logger.error('shutdown: deadline exceeded, forcing exit');
      process.exit(1);
    }, 25_000);
    deadline.unref();

    try {
      // Start close(), then sever keep-alive sockets — awaiting close()
      // alone waits on idle keep-alive connections and hangs until the
      // deadline, which is exactly the failure described above.
      const closed = new Promise<void>((resolve, reject) =>
        server.close(err => (err ? reject(err) : resolve())));
      server.closeAllConnections();
      await closed;
      await queue.stop();
      await redis.quit();
      await db.end();
      clearTimeout(deadline);
      logger.info('shutdown: clean');
      // No process.exit(0): every handle is closed, so the event loop
      // drains and the process exits on its own.
    } catch (err) {
      logger.error({ err }, 'shutdown: error during teardown');
      process.exit(1);
    }
  };

  let shuttingDown = false;
  for (const sig of ['SIGTERM', 'SIGINT'] as const) {
    process.on(sig, () => {
      if (shuttingDown) return;
      shuttingDown = true;
      void shutdown(sig);
    });
  }
}

Idempotent (second SIGTERM is a no-op). Deadline guarantees the pod terminates. Teardown is ordered so no in-flight request hits a closed DB.

Rule 5: Stream-Based I/O — Backpressure Over Buffering

fs.readFile('huge.csv') loads the whole file into a Buffer. For a 2GB file that's 2GB of memory held at once — and the moment you .toString() or JSON.parse it, you're past V8's string limits and heap budget, and the process dies out of memory. Cursor reaches for readFile + JSON.parse + map because it reads naturally, and because the typical training example is a 3KB fixture. The same mistake shows up in HTTP responses that res.send(entireArray) a million-row query result, in S3 uploads that await Body.transformToByteArray() before re-uploading, and in "transformations" that data.forEach(row => otherStream.write(row)) and ignore the return value of write — the buffer fills, the process RSS climbs, and the OOM-killer takes the container out.

The rule:

For inputs larger than config.MAX_SMALL_PAYLOAD (default 1MB), use
streams: fs.createReadStream, got.stream, s3.getObject's Body stream,
node:stream/web ReadableStream.

Pipelines use `stream/promises` `pipeline()` (not `.pipe()`). pipeline
awaits, handles errors end-to-end, destroys every stream on failure.
Never chain `.pipe()` manually — errors leak resources.

Transforms respect backpressure: `transform(chunk, enc, cb)` only calls
cb() when downstream drained. For parallel async work, use
`Transform` with `objectMode` and a bounded concurrency helper.

Parsing line-oriented or delimited data: stream parsers (fast-csv,
split2, JSONStream, ndjson) — never `buffer.toString().split('\n')`.

HTTP responses that could be >1MB stream: `pipeline(source, res)`.
Set Content-Type and Transfer-Encoding: chunked; never Content-Length
ahead of time for a stream.

Read the return value of stream.write(). If false, wait for 'drain'
before writing more.
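The last point — honoring write()'s return value — wraps into a small helper. A sketch using node:stream primitives (`writeWithBackpressure` is a name assumed here):

```typescript
import { Writable } from 'node:stream';
import { once } from 'node:events';

// Write a chunk; if write() returned false (internal buffer at or above
// highWaterMark), wait for 'drain' before the caller writes more.
export async function writeWithBackpressure(
  stream: Writable,
  chunk: string | Buffer,
): Promise<void> {
  if (!stream.write(chunk)) {
    await once(stream, 'drain');
  }
}
```

Used in a loop, this keeps memory flat no matter how slow the destination is — the producer simply pauses until the sink catches up.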

Before — full buffer load, OOM on any real file:

app.get('/export/users.csv', async (req, res) => {
  const users = await db.users.findAll(); // all rows into memory
  const csv = users.map(u => `${u.id},${u.email}`).join('\n'); // string concat in memory
  res.type('text/csv').send(csv);
});

Works on 1000 rows. At 1M rows the process RSS hits 3GB and the container restarts.

After — server-side cursor, streaming CSV encode, pipeline with backpressure:

import { pipeline } from 'node:stream/promises';
import { Transform } from 'node:stream';

app.get('/export/users.csv', asyncHandler(async (req, res) => {
  res.type('text/csv');
  res.setHeader('Content-Disposition', 'attachment; filename="users.csv"');

  const cursor = db.users.cursor({ batchSize: 500 }); // server-side cursor

  const toCsv = new Transform({
    objectMode: true,
    transform(user: User, _enc, cb) {
      cb(null, `${user.id},${JSON.stringify(user.email)}\n`);
    },
  });

  res.write('id,email\n');
  await pipeline(cursor, toCsv, res);
}));

Memory stays flat regardless of table size. Backpressure propagates from res through toCsv into the DB cursor — slow clients slow the query instead of buffering the world.

Rule 6: Structured Logging With Context — Never console.log in Production

console.log('user', user) prints a plain-text blob to stdout that the log aggregator indexes as free text. No severity, no request ID, no trace ID, no machine-parseable fields. Cursor writes this everywhere because the training data is full of tutorials that do exactly this. Production debugging becomes grep-and-hope. Worse, console.log in a hot path serializes on every call through the same stdout handle — at high throughput it becomes a bottleneck, visible as event-loop lag spikes that look like your DB is slow. The fix is a structured logger (Pino, Winston, Bunyan) with request-scoped context propagated via AsyncLocalStorage.
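AsyncLocalStorage is the mechanism doing the work here: context set once at request entry stays visible across every await in that request, with no parameter threading. A minimal sketch, independent of any logger:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// One store per process; each request runs inside its own context object.
const requestStore = new AsyncLocalStorage<{ reqId: string }>();

// Anywhere below the run() call — even after awaits — getStore() returns
// the context of the request that started this async chain.
function currentReqId(): string | undefined {
  return requestStore.getStore()?.reqId;
}

async function handle(reqId: string): Promise<string | undefined> {
  return requestStore.run({ reqId }, async () => {
    await new Promise((resolve) => setImmediate(resolve)); // simulate I/O
    return currentReqId(); // still this request's id, not another's
  });
}
```

This is exactly what Pino's `mixin` reads on every log call in the example below.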

The rule:

Single logger module exports a Pino instance configured from config.
No `console.log` / `console.error` / `console.debug` in application
code. ESLint rule `no-console`: error, with allowances only in CLI
entry scripts.

Logs are JSON in production, pretty-printed in development (pino-pretty).
Every log line has a level, message, timestamp, and context object —
never concatenate values into the message.

Request context (reqId, userId, traceId) lives in an
`AsyncLocalStorage` store set by a middleware on request entry. The
logger reads from the store via `mixin`. Handlers call
`logger.info({ orderId }, 'order created')` — the reqId is attached
automatically.

Redact secrets: Pino's `redact: ['req.headers.authorization',
'*.password', '*.token', '*.apiKey']`.

Log levels are stable:
  fatal — process will exit
  error — unexpected, paged
  warn  — expected-but-noteworthy (4xx, retryable failures)
  info  — lifecycle events (boot, shutdown, key business events)
  debug — detail for investigations, off in prod
  trace — very verbose, local only

Before — console.log, no context, secret in the log:

app.post('/login', async (req, res) => {
  console.log('login attempt', req.body); // password is in req.body!
  const user = await db.users.findByEmail(req.body.email);
  if (!user || !await verify(req.body.password, user.hash)) {
    console.log('login failed for', req.body.email);
    return res.status(401).send('invalid');
  }
  console.log('login ok', user.id);
  res.json({ token: sign(user) });
});

Three log lines per request, none correlated. Password exposed in log aggregator. No way to tie the three lines to one request in concurrent traffic.

After — Pino, ALS context, redaction, structured fields:

// logger.ts
import pino from 'pino';
import { AsyncLocalStorage } from 'node:async_hooks';

export const requestStore = new AsyncLocalStorage<{ reqId: string; userId?: string }>();

export const logger = pino({
  level: config.LOG_LEVEL,
  redact: { paths: ['req.headers.authorization', '*.password', '*.token'], censor: '[REDACTED]' },
  mixin: () => requestStore.getStore() ?? {},
  transport: config.NODE_ENV === 'development'
    ? { target: 'pino-pretty', options: { colorize: true } }
    : undefined,
});

// middleware/request-context.ts
export function requestContext(req: Request, res: Response, next: NextFunction) {
  const reqId = req.headers['x-request-id']?.toString() ?? crypto.randomUUID();
  res.setHeader('x-request-id', reqId);
  requestStore.run({ reqId }, next);
}

// routes/auth.ts
app.post('/login', asyncHandler(async (req, res) => {
  const { email, password } = LoginSchema.parse(req.body);
  const user = await db.users.findByEmail(email);
  if (!user || !await verify(password, user.hash)) {
    logger.warn({ email }, 'login: invalid credentials');
    throw new UnauthorizedError('invalid credentials');
  }
  logger.info({ userId: user.id }, 'login: success');
  res.json({ token: sign(user) });
}));

Every log line in a request carries the same reqId. Password never hits the log. Level-correct (warn for expected 401s, not error). JSON in prod, tailable in dev.

Rule 7: Dependency Injection — Composition Root Over Ambient Singletons

The canonical Node.js service has a db.js that exports module.exports = new Pool(...) at module-load time. Every file that needs the pool does const db = require('./db'). It works, and it makes tests impossible — you can't swap the DB for the test without mangling require.cache. Cursor generates this shape because nine out of ten Stack Overflow answers do. The structured alternative is a composition root: one place at startup that constructs the concrete implementations, injects them via constructors, and hands the fully-wired app back to main(). Tests construct the same graph with fakes. No module mocking, no jest.mock, no per-test patch.

The rule:

No module-level `new Pool()`, `new RedisClient()`, or service singletons.
A `createX(deps, config)` factory per module returns the wired instance.

Constructors take an explicit `deps` object. Types at the boundary use
narrow interfaces (`UserRepo { findById(id): Promise<User | null> }`) —
never the concrete class. Production wires the concrete, tests wire
a fake that implements the interface.

The composition root is `src/app.ts` (or `src/main.ts`). It reads
config, constructs infra (db, redis, queue), constructs services with
the infra, constructs the HTTP app with the services, and returns
the whole graph. `main()` calls `await start(app)`.

Tests import the composition root helpers. A typical integration test
is `const app = await buildAppForTest({ userRepo: fakeRepo })`. No
jest.mock. No proxyquire. No require.cache surgery.

Global state (logger, config) is ALLOWED because it is immutable and
not IO. DB, HTTP, queues, clocks, and randomness are NEVER global —
inject a Clock / Uuid port so time-dependent tests are deterministic.

Before — module-level singleton, impossible to unit test:

// db.js
const { Pool } = require('pg');
module.exports = new Pool({ connectionString: process.env.DATABASE_URL });

// users.js
const db = require('./db');
async function getUser(id) {
  const { rows } = await db.query('SELECT * FROM users WHERE id=$1', [id]);
  return rows[0];
}
module.exports = { getUser };

To test getUser you need a real DB or jest.mock('./db'). Both are brittle.

After — factory + constructor injection, explicit composition root:

// users/repository.ts
export interface UserRepo {
  findById(id: string): Promise<User | null>;
}

export class PgUserRepo implements UserRepo {
  constructor(private readonly db: Pool) {}
  async findById(id: string): Promise<User | null> {
    const { rows } = await this.db.query('SELECT * FROM users WHERE id=$1', [id]);
    return rows[0] ?? null;
  }
}

// users/service.ts
export class UserService {
  constructor(private readonly repo: UserRepo, private readonly clock: Clock) {}
  async getUser(id: string) {
    const user = await this.repo.findById(id);
    if (!user) throw new NotFoundError('user');
    return { ...user, fetchedAt: this.clock.now() };
  }
}

// app.ts — composition root
export async function buildApp(config: Config): Promise<App> {
  const db = new Pool({ connectionString: config.DATABASE_URL });
  const clock: Clock = { now: () => new Date() };
  const userRepo = new PgUserRepo(db);
  const userService = new UserService(userRepo, clock);
  const server = buildHttpServer({ userService });
  return { server, db, userService, stop: () => db.end() };
}

// tests — no mocks, just a fake repo
const fakeRepo: UserRepo = { findById: async () => ({ id: '1', email: 'a@b.c' }) };
const frozenClock: Clock = { now: () => new Date('2026-01-01') };
const service = new UserService(fakeRepo, frozenClock);
expect(await service.getUser('1')).toEqual({ id: '1', email: 'a@b.c', fetchedAt: new Date('2026-01-01') });

Tests construct the graph with fakes. Production wires the real DB. No jest.mock, no ambient singletons, no hidden state.

Rule 8: Type-Safe Route Handlers — Zod at the Boundary, Inferred Everywhere

Every untyped route handler is req.body.email waiting to explode. Cursor writes const { email, password } = req.body; with no validation — a client sends { email: ['a', 'b'] } and the downstream db.users.findByEmail(email) runs with an array. Express's req.body, req.query, and req.params are typed any by default. The fix is schema validation at the boundary with Zod (or Valibot, TypeBox, io-ts) that both validates and narrows: once LoginSchema.parse(req.body) returns, TypeScript knows the exact shape inside the handler. Combined with a typed request wrapper or a router like Fastify with JSON Schema, the whole handler is checked end-to-end.

The rule:

Every route handler validates `req.body`, `req.query`, `req.params`
with a Zod schema before using them. Validation failures throw
`ZodError`, caught by the error middleware and mapped to 400.

Response bodies are typed. Fastify: `schema: { response: {...} }`.
Express: the handler's return type is declared, and an `asJson(schema,
value)` helper validates in dev and no-ops in prod.

Shared schemas live in `@/schemas/*` and are imported by both
client code (for form validation) and the server. One source of
truth per contract.

Where possible, use a framework with first-class typing:
Fastify + @fastify/type-provider-typebox, Hono + zValidator,
ts-rest, tRPC. Plain express + Zod is fine — but treat `req.body`
as `unknown` and narrow via the schema.

Refinements live in the schema, not the handler:
`z.string().email().toLowerCase().trim()` — the handler receives
clean data.
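The `asJson(schema, value)` helper named above isn't a library function. A sketch of the validate-in-dev, no-op-in-prod idea, typed structurally so it works with Zod or anything exposing `parse`:

```typescript
// Validate response bodies in development so the server catches its own
// contract breaks early; skip the validation cost in production.
export type Parser<T> = { parse(value: unknown): T };

export function asJson<T>(
  schema: Parser<T>,
  value: T,
  env: 'development' | 'test' | 'production',
): T {
  return env === 'production' ? value : schema.parse(value);
}
```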

Before — untyped body, no validation, runtime surprise:

app.post('/users', async (req, res) => {
  const { email, password, age } = req.body;
  // email could be array, password could be undefined, age could be string
  const hash = await bcrypt.hash(password, 10); // crashes if password is undefined
  const user = await db.users.insert({ email, hash, age });
  res.json(user);
});

Every production incident is a payload shape the author didn't consider.

After — Zod schema, inferred types, error middleware handles validation:

// schemas/user.ts
export const CreateUserSchema = z.object({
  email: z.string().email().toLowerCase().trim(),
  password: z.string().min(8).max(128),
  age: z.number().int().min(13).max(130),
});
export type CreateUserInput = z.infer<typeof CreateUserSchema>;

// routes/users.ts
app.post('/users', asyncHandler(async (req, res) => {
  const input: CreateUserInput = CreateUserSchema.parse(req.body);
  // input.email is string, input.password is string, input.age is number — guaranteed.
  const hash = await bcrypt.hash(input.password, 10);
  const user = await userService.create({ email: input.email, hash, age: input.age });
  res.status(201).json(UserResponseSchema.parse(user));
}));

// middleware/errors.ts — maps ZodError -> 400
app.use((err, req, res, next) => {
  if (err instanceof ZodError) {
    return res.status(400).json({
      code: 'VALIDATION_ERROR',
      issues: err.issues.map(i => ({ path: i.path, message: i.message })),
    });
  }
  next(err);
});

Malformed payload returns 400 with a precise issue list. input is fully typed. Response is validated too — the server catches its own contract breaks in dev.

The Complete .cursorrules File

Drop this in the repo root. Cursor and Claude Code both pick it up.

# Node.js — Production Patterns

## Async Error Handling
- Every async call is awaited or its promise returned. No bare
  `somePromise();` — lint error.
- try/catch only to wrap-with-context (re-throw) or to clean up
  (finally). Never log-and-swallow.
- All errors extend AppError with code/statusCode/cause. Never throw
  strings. Never throw plain Error in library code.
- Route handlers use asyncHandler / framework async. Central error
  middleware is the only place that maps errors to HTTP. Handlers
  never call res.status(500) directly.
- process.on('unhandledRejection'/'uncaughtException'): log + exit(1).

## Module Boundaries
- One folder = one module. index.ts re-exports only the public API.
  Importers outside reach in via `./users`, not `./users/internal/...`.
- `eslint-plugin-import/no-cycle`: error, enforced in CI.
- Import aliases (`@/users`), never `../../../../`. Types-only imports
  with `import type`.
- Each module exports a factory. No module-level side effects
  (db.connect on import) — init in explicit start().

## Config Validation at Startup
- All env access goes through a Zod/Envalid schema. No
  `process.env.X` in app code.
- Invalid env is fatal at boot with a human-readable issue list.
  Weak/default secrets rejected in production.
- Config frozen with Object.freeze. No runtime mutation, no
  per-test patching — swap the whole config.
- NODE_ENV is a strict enum.

## Graceful Shutdown
- main() registers SIGTERM + SIGINT. Shutdown is idempotent and
  timeboxed (e.g. 25s under a 30s k8s grace window).
- Teardown order: stop accepting work -> drain in-flight ->
  close resources in reverse init order -> exit.
- server.close() + server.closeAllConnections() (Node 18.2+). Every
  long-lived resource has a .stop() returning a Promise.
- No process.exit(0) inside handlers. Exit(1) only on timeout.

## Stream-Based I/O
- Inputs > ~1MB: streams (fs.createReadStream, cursor iteration,
  got.stream). Never fs.readFileSync on user data.
- stream/promises.pipeline() — never manual .pipe() chains.
- Transforms respect backpressure. Parallelism via bounded helpers.
- Line/delimited parsing with stream parsers — never buffer.toString().split.
- HTTP responses potentially > 1MB stream; no Content-Length ahead of
  streamed content.

## Structured Logging
- Single Pino (or equivalent) logger. `no-console` ESLint error
  outside CLI entry scripts.
- JSON in prod, pretty in dev. Every line: level, msg, timestamp,
  context object — never string concatenation.
- AsyncLocalStorage holds reqId / userId / traceId; logger's `mixin`
  reads from it.
- Redact secrets (authorization, password, token, apiKey).
- Stable level semantics: fatal/error/warn/info/debug/trace.

## Dependency Injection
- No module-level `new Pool()`/`new Redis()` singletons. Factories
  `createX(deps, config)`.
- Constructors take explicit `deps`. Narrow interfaces at the
  boundary, concrete classes behind them.
- Composition root (`src/app.ts`) wires everything. main() awaits it.
- Tests build the graph with fakes — no jest.mock, no proxyquire, no
  require.cache surgery.
- Clock / Uuid / Random are injected ports — never Date.now()/Math.random
  in business logic.

## Type-Safe Route Handlers
- Every handler validates req.body/query/params with a Zod schema.
  Validation failures -> ZodError -> 400 via error middleware.
- Response bodies typed; Fastify JSON Schema or an Express
  `asJson(schema, value)` helper for dev-time validation.
- Shared schemas in @/schemas/* imported by client and server.
- Refinements (trim, toLowerCase, email, url) live in the schema,
  not the handler.
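The Graceful Shutdown rules above are the easiest to get subtly wrong, so here is a minimal runnable sketch of what they imply. `makeShutdown`, `StoppableServer`, and `Closeable` are illustrative names, not a real API:

```typescript
// Illustrative shapes: just enough surface to show the rule.
type StoppableServer = { close(): void; closeAllConnections?(): void };
type Closeable = { stop(): Promise<void> };

function makeShutdown(
  server: StoppableServer,
  resources: Closeable[], // in init order; torn down in reverse
  timeoutMs = 25_000,     // under a 30s k8s grace window
) {
  let called = false;
  return async function shutdown(): Promise<void> {
    if (called) return; // idempotent: SIGTERM followed by SIGINT is a no-op
    called = true;
    const timer = setTimeout(() => process.exit(1), timeoutMs); // exit(1) only on timeout
    timer.unref();
    server.close();                 // stop accepting new work
    server.closeAllConnections?.(); // Node 18.2+
    for (const r of [...resources].reverse()) await r.stop();
    clearTimeout(timer);
  };
}

// main() registers it once per signal:
// const shutdown = makeShutdown(server, [db, redis]);
// process.on('SIGTERM', () => void shutdown());
// process.on('SIGINT', () => void shutdown());
```

Because `shutdown` is a plain async function, the whole sequence tests without sending real signals: call it twice with fake resources and assert teardown ran once, in reverse order.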

End-to-End Example: A POST /orders Endpoint

Without rules: ambient DB singleton, unvalidated body, unawaited audit write, swallowed error, console.log, no shutdown.

const db = require('./db'); // module-level singleton

app.post('/orders', async (req, res) => {
  try {
    console.log('creating order', req.body);
    const order = await db.query(
      'INSERT INTO orders(user_id, total) VALUES($1,$2) RETURNING *',
      [req.body.userId, req.body.total],
    );
    db.query('INSERT INTO audit(...) VALUES(...)'); // unawaited
    res.send(order.rows[0]);
  } catch (e) {
    console.log(e);
    res.send({ error: e.message });
  }
});

app.listen(3000);

With rules: Zod schema, injected service, typed errors, structured log, awaited side effect, graceful shutdown elsewhere.

// schemas/orders.ts
import { z } from 'zod';

export const CreateOrderSchema = z.object({
  userId: z.string().uuid(),
  total: z.number().positive().max(1_000_000),
});
export type CreateOrderInput = z.infer<typeof CreateOrderSchema>;

// routes/orders.ts
export function ordersRouter(deps: { orderService: OrderService }) {
  const router = Router();
  router.post('/orders', asyncHandler(async (req, res) => {
    const input = CreateOrderSchema.parse(req.body);
    const order = await deps.orderService.create(input);
    logger.info({ orderId: order.id, userId: input.userId }, 'order: created');
    res.status(201).json(OrderResponseSchema.parse(order));
  }));
  return router;
}

// services/order-service.ts
export class OrderService {
  constructor(
    private readonly repo: OrderRepo,
    private readonly audit: AuditLog,
    private readonly clock: Clock,
  ) {}

  async create(input: CreateOrderInput): Promise<Order> {
    const order = await this.repo.insert({ ...input, createdAt: this.clock.now() });
    await this.audit.record({ kind: 'order.created', orderId: order.id });
    return order;
  }
}
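Because OrderService takes explicit deps, it tests with hand-rolled fakes: no jest.mock, no require.cache surgery. A sketch, with the types trimmed to what the test needs:

```typescript
// Types mirror the service above, trimmed to what the test needs.
type Order = { id: string; userId: string; total: number; createdAt: Date };
interface OrderRepo { insert(o: Omit<Order, 'id'>): Promise<Order>; }
interface AuditLog { record(e: { kind: string; orderId: string }): Promise<void>; }
interface Clock { now(): Date; }

class OrderService {
  constructor(
    private readonly repo: OrderRepo,
    private readonly audit: AuditLog,
    private readonly clock: Clock,
  ) {}
  async create(input: { userId: string; total: number }): Promise<Order> {
    const order = await this.repo.insert({ ...input, createdAt: this.clock.now() });
    await this.audit.record({ kind: 'order.created', orderId: order.id });
    return order;
  }
}

// Hand-rolled fakes: each one records what happened so the test can assert on it.
const auditEvents: string[] = [];
const fakeRepo: OrderRepo = { insert: async (o) => ({ id: 'order-1', ...o }) };
const fakeAudit: AuditLog = { record: async (e) => { auditEvents.push(e.kind); } };
const fixedClock: Clock = { now: () => new Date('2024-01-01T00:00:00Z') };

const service = new OrderService(fakeRepo, fakeAudit, fixedClock);
```

The injected clock makes createdAt deterministic, and the fake audit log proves the side effect was actually awaited before create resolved.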

Malformed body → 400 with issue list. Service failure → error middleware → 500 with a stable code, no stack leak. audit.record is awaited. The log line carries reqId, orderId, userId. Shutdown drains the pool.
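The reqId on that log line is the Structured Logging rule at work: AsyncLocalStorage carries per-request context across async hops, and the logger's mixin reads it. A minimal stdlib sketch, where `withRequestContext` stands in for real middleware:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';
import { randomUUID } from 'node:crypto';

const requestContext = new AsyncLocalStorage<{ reqId: string }>();

// What a Pino `mixin` would return: merged into every log line automatically,
// so handlers never thread reqId through function arguments.
function logContext(): Record<string, string> {
  return requestContext.getStore() ?? {};
}

// Middleware-shaped wrapper (illustrative): everything awaited inside `fn`
// sees the same reqId, even across timers and I/O callbacks.
function withRequestContext<T>(fn: () => Promise<T>): Promise<T> {
  return requestContext.run({ reqId: randomUUID() }, fn);
}
```

Outside a `run` call, `logContext()` returns an empty object, so nothing leaks between requests.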

Get the Full Pack

These eight rules cover the Node.js patterns where AI assistants consistently reach for the wrong idiom. Drop them into .cursorrules and the next prompt you write will look different — awaited, typed, validated, streamed, logged, injected, shutdown-clean Node.js, without having to re-prompt.

If you want the expanded pack — these eight plus rules for Fastify route schemas, BullMQ job patterns, Prisma transactions, testcontainers-based integration tests, Dockerfile layering for Node services, OpenTelemetry traces, rate-limiting, and the testing conventions I use on production Node services — it is bundled in Cursor Rules Pack v2 ($27, one payment, lifetime updates). Drop it in your repo, stop fighting your AI, ship Node.js you would actually merge.
