This was originally published on rollgate.io/blog/feature-flags-nodejs.
Why Feature Flags in Node.js?
Node.js powers a huge slice of production backends — REST APIs, GraphQL gateways, background workers, BFF layers, real-time services. All of them share the same release problem: you want to ship code continuously, but you do not want every deploy to be a product change.
Feature flags in Node.js decouple deployment from release. You push code to production behind a flag and decide later who sees the new behavior, when, and under what conditions. If something breaks, you flip the flag off in the dashboard — no redeploy, no rollback PR, no pager at 3am.
This guide covers the practical side: how to wire feature flags into Express and Fastify applications, how to target specific users, how to roll out gradually, and the production gotchas that every team hits sooner or later.
Quick Start: Feature Flags in Node.js
Let us get a flag running end-to-end. Install the SDK:
```bash
npm install @rollgate/sdk-node
```
Then wire it up:
```typescript
import { RollgateClient } from '@rollgate/sdk-node';

const rollgate = new RollgateClient({
  apiKey: process.env.ROLLGATE_API_KEY!,
  enableStreaming: true, // real-time updates over SSE
});

await rollgate.init();

if (rollgate.isEnabled('new-checkout', false)) {
  console.log('New checkout flow enabled');
} else {
  console.log('Legacy checkout');
}
```
That is the whole setup. The SDK pulls rules from the API, caches them in memory, and keeps them fresh in the background. With enableStreaming: true the client keeps a Server-Sent Events connection open and applies changes within ~50ms of a flag flip. The second argument to isEnabled is the default value returned if the client is not yet initialized or the flag does not exist.
Evaluation is local, in-process. No network hop per flag check — the rules are already in memory, so you can evaluate thousands of flags per request without adding latency to the hot path.
The DIY Approach (and Its Limitations)
Before reaching for a dedicated platform, most teams start with environment variables:
```typescript
const flags = {
  newCheckout: process.env.NEW_CHECKOUT === 'true',
  darkMode: process.env.DARK_MODE === 'true',
};

if (flags.newCheckout) {
  // ...
}
```
This works, for exactly one week. Then you hit the limitations:
- Every flag change requires a redeploy — the whole point of flags is to avoid that
- No gradual rollouts — it is all-or-nothing for every user
- No targeting — you cannot enable a feature for beta testers, enterprise plans, or a specific region
- No kill switch — if the new code breaks, rolling back means another deploy cycle
- No audit trail — you do not know who flipped what, when, or why
The next evolution is usually a config file or a database table. You solve the redeploy problem but inherit a new one: keeping the config in sync across every instance of your Node.js service, and refreshing it without restarts. That is where a purpose-built feature flag platform earns its keep.
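That stage tends to look something like the sketch below: a database-backed flag table plus a poller. Names like `loadFlagsFromDb` are hypothetical stand-ins for a real query (say, `SELECT key, enabled FROM feature_flags`). It fixes the redeploy problem, but every instance runs its own poller and changes only propagate on the next tick, which is the sync problem in miniature.

```typescript
type FlagMap = Record<string, boolean>;

// Stand-in for a real database round-trip.
async function loadFlagsFromDb(): Promise<FlagMap> {
  return { newCheckout: true, darkMode: false };
}

let flags: FlagMap = {};

async function refreshFlags(): Promise<void> {
  try {
    flags = await loadFlagsFromDb();
  } catch {
    // DB unreachable: keep serving the last known flags.
  }
}

// Every instance must run its own poller; a flag change only becomes
// visible on the next refresh, and instances refresh at different times.
function startPolling(intervalMs = 30_000): ReturnType<typeof setInterval> {
  void refreshFlags();
  return setInterval(refreshFlags, intervalMs);
}

function isEnabled(key: string, fallback = false): boolean {
  return flags[key] ?? fallback;
}
```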
Using Rollgate with Express
Express is still the workhorse of the Node.js backend ecosystem. Here is a clean pattern: attach the Rollgate client to the request via middleware, then evaluate flags inside route handlers with the user context from the request.
```typescript
import express from 'express';
import { RollgateClient } from '@rollgate/sdk-node';

// Augment Express's Request type so req.flags type-checks.
declare global {
  namespace Express {
    interface Request {
      flags: { isEnabled: (key: string, fallback?: boolean) => boolean };
    }
  }
}

const app = express();

const rollgate = new RollgateClient({
  apiKey: process.env.ROLLGATE_API_KEY!,
  enableStreaming: true,
});
await rollgate.init();

app.use((req, res, next) => {
  const userId = req.headers['x-user-id'] as string | undefined;
  req.flags = {
    isEnabled: (key: string, fallback = false) =>
      rollgate.isEnabled(key, fallback, userId ? { userId } : undefined),
  };
  next();
});

app.get('/checkout', (req, res) => {
  if (req.flags.isEnabled('new-checkout')) {
    return res.json({ version: 'v2', flow: 'stripe-elements' });
  }
  return res.json({ version: 'v1', flow: 'legacy-form' });
});

app.listen(3000);
```
The EvalContext you pass as the third argument lets you evaluate a flag for a specific user without mutating client-level state. Each request gets its own targeting evaluation based on userId and any attributes you forward (plan, region, role, anything your targeting rules reference).
Remember to shut the client down cleanly on SIGTERM so the SSE connection and telemetry buffers drain properly:
```typescript
process.on('SIGTERM', async () => {
  await rollgate.close();
  process.exit(0);
});
```
Using Rollgate with Fastify
Fastify is the faster, more opinionated alternative. The pattern is the same — a plugin that decorates the request — but with Fastify's decorator API:
```typescript
import Fastify from 'fastify';
import { RollgateClient } from '@rollgate/sdk-node';

// Augment Fastify's request type so request.flags type-checks.
declare module 'fastify' {
  interface FastifyRequest {
    flags: { isEnabled: (key: string, fallback?: boolean) => boolean };
  }
}

const fastify = Fastify({ logger: true });

const rollgate = new RollgateClient({
  apiKey: process.env.ROLLGATE_API_KEY!,
  enableStreaming: true,
});
await rollgate.init();

fastify.decorateRequest('flags', null);

fastify.addHook('onRequest', async (request) => {
  const userId = request.headers['x-user-id'] as string | undefined;
  request.flags = {
    isEnabled: (key: string, fallback = false) =>
      rollgate.isEnabled(key, fallback, userId ? { userId } : undefined),
  };
});

fastify.get('/api/experiments', async (request) => {
  return {
    pricing: request.flags.isEnabled('new-pricing-ui'),
    search: request.flags.isEnabled('semantic-search'),
  };
});

fastify.addHook('onClose', async () => {
  await rollgate.close();
});

await fastify.listen({ port: 3000 });
```
One thing to watch: if you are running Fastify with logger: true and want flag values in every log line, reassign the request logger to a child logger (request.log = request.log.child({ flags: [...] })) inside the onRequest hook — child() returns a new logger rather than mutating the existing one. Observability into which flags evaluated for which request is the kind of detail that saves you hours in an incident.
Gradual Rollouts and User Targeting
Once flags are wired, the real value kicks in: turning a feature on for 1% of traffic, watching error rates for an hour, then bumping it to 10% the next day. Rollgate handles this with sticky, deterministic bucketing — the same user always lands in the same bucket, so a user who sees the new feature at 5% keeps seeing it when you move to 50%.
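The bucketing idea itself is simple to sketch. The snippet below is illustrative only (Rollgate's actual hashing scheme may differ): hash the flag key and user ID together into a number from 0 to 99, and the user is in the rollout when that number is below the current percentage. Because the hash is deterministic, raising the percentage only adds users; nobody who already saw the feature loses it.

```typescript
import { createHash } from 'node:crypto';

// Deterministic sticky bucketing: the same (user, flag) pair always maps
// to the same bucket in 0..99.
function bucketOf(userId: string, flagKey: string): number {
  const digest = createHash('sha256').update(`${flagKey}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100;
}

// A user is in the rollout when their bucket is below the percentage,
// so ramping 5% -> 50% keeps every existing 5% user enabled.
function inRollout(userId: string, flagKey: string, percent: number): boolean {
  return bucketOf(userId, flagKey) < percent;
}
```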
You do not need to change your Node.js code when you change the rollout percentage. The rules live in the dashboard; your SDK pulls the new rules and evaluates them locally.
```typescript
const showNewFlow = rollgate.isEnabled('checkout-v2', false, {
  userId: user.id,
  attributes: {
    plan: user.plan,
    region: user.region,
    signupDate: user.signupDate,
  },
});
```
The attributes you pass feed into targeting rules. A common pattern for B2B SaaS: enable a feature for all plan = "enterprise" users plus 10% of plan = "pro" users, with no rollout for free-tier. That is three rules in the dashboard, zero code changes.
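Written out by hand, that enterprise-plus-10%-of-pro policy might look like the sketch below. This is illustrative only; with Rollgate the rules live in the dashboard and the SDK evaluates equivalent logic for you.

```typescript
import { createHash } from 'node:crypto';

interface User {
  id: string;
  plan: 'free' | 'pro' | 'enterprise';
}

// Deterministic percentage bucket in 0..99 for a (flag, user) pair.
function percentBucket(userId: string, flagKey: string): number {
  const digest = createHash('sha256').update(`${flagKey}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100;
}

function newPricingEnabled(user: User): boolean {
  if (user.plan === 'enterprise') return true; // rule 1: all enterprise users
  if (user.plan === 'pro') {
    return percentBucket(user.id, 'new-pricing-ui') < 10; // rule 2: 10% of pro
  }
  return false; // rule 3: no rollout for free tier
}
```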
If you are using flags for experimentation rather than safe releases, pair them with event tracking. The Node SDK exposes client.track() for conversion events, which plugs into A/B testing workflows.
Production Considerations
A flag system sits in the hot path of every request. That changes how you think about it.
Caching and evaluation mode. The SDK evaluates locally by default — rules are cached in memory and refreshed via SSE or polling. There is no network call on each isEnabled(). In a Node.js process with a hot path that evaluates flags thousands of times per second, this matters: network-dependent flag checks would wreck your P99.
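Conceptually, local evaluation is nothing more than a synchronous map lookup. A minimal sketch (names here are illustrative, not the SDK's internals): a background refresh writes rules into memory, and the hot-path check never awaits anything.

```typescript
type Rule = { enabled: boolean };

// In-memory rule store, overwritten by the background refresh (SSE or polling).
const rules = new Map<string, Rule>();

// Called off the hot path whenever new rules arrive.
function applyRuleUpdate(key: string, rule: Rule): void {
  rules.set(key, rule);
}

// Hot path: no await, no network -- just a hash-map read, so evaluating
// thousands of flags per request adds no meaningful latency.
function isEnabled(key: string, fallback = false): boolean {
  return rules.get(key)?.enabled ?? fallback;
}
```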
Resilience. The SDK ships with a circuit breaker, retry-with-backoff, and a stale cache fallback. If the Rollgate API becomes unreachable, your service keeps serving flag evaluations using the last known rules — it does not hard-fail. You can subscribe to circuit-open and flags-stale events to surface this in your own monitoring:
```typescript
rollgate.on('circuit-open', () => {
  metrics.increment('rollgate.circuit.open');
});

rollgate.on('flags-stale', () => {
  metrics.increment('rollgate.flags.stale');
});
```
Kill switches in production. Wrap risky code paths — a new payment provider, a rewritten algorithm, an external API integration — in a flag you can flip instantly. When something breaks, you want the shortest possible path from "we are paging" to "traffic is back on the old code." A flag flip takes under a second; a rollback deploy takes tens of minutes.
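A hedged sketch of the kill-switch pattern: the new provider runs only while the flag is on, and every failure path lands on the proven old code. The `chargeWith*` functions are placeholders; in real code the boolean would come from something like `rollgate.isEnabled('new-payment-provider', false, { userId })`.

```typescript
// Placeholder for the risky new integration.
async function chargeWithNewProvider(amount: number): Promise<string> {
  return `new:${amount}`;
}

// Placeholder for the battle-tested old path.
async function chargeWithLegacyProvider(amount: number): Promise<string> {
  return `legacy:${amount}`;
}

async function charge(amount: number, useNewProvider: boolean): Promise<string> {
  if (useNewProvider) {
    try {
      return await chargeWithNewProvider(amount);
    } catch {
      // Flag is on but the new path failed: fall through to legacy
      // instead of failing the transaction.
    }
  }
  return chargeWithLegacyProvider(amount);
}
```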
Process lifecycle. Always call rollgate.close() on shutdown. It closes the SSE connection, flushes pending telemetry, and lets Kubernetes or your PaaS roll pods cleanly. Skipping this leaks file descriptors and loses the last batch of evaluation analytics.
One client per process, not per request. The SDK client is safe to share across concurrent requests and is designed to be long-lived. Do not instantiate a new RollgateClient per request — you will hit the API hard, leak connections, and lose the benefit of local caching. Create one client at app start and shut it down on SIGTERM.
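A minimal sketch of that lifecycle, with `FakeClient` standing in for `RollgateClient` so the example runs on its own: the module creates the client once, and every caller shares the same initialized instance.

```typescript
// Stand-in for RollgateClient, so the sketch is self-contained.
class FakeClient {
  initialized = false;
  async init(): Promise<void> {
    this.initialized = true;
  }
}

let client: FakeClient | undefined;
let initPromise: Promise<FakeClient> | undefined;

// Caching the init promise (not just the client) means concurrent callers
// during startup all await the same initialization instead of racing.
async function getClient(): Promise<FakeClient> {
  if (!initPromise) {
    client = new FakeClient();
    initPromise = client.init().then(() => client!);
  }
  return initPromise;
}
```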
Best Practices
- Name flags by feature, not by team. `new-checkout` ages better than `backend-team-q2-project`. Future you will thank present you.
- Always pass a sensible default. `rollgate.isEnabled('feature', false)` — false is usually the safe default (do not ship the new thing if we cannot decide). Explicit is better than surprising.
- Retire flags. Once a rollout hits 100% and has been stable for a week, remove the flag from code. Zombie flags are a maintenance tax.
- Log the evaluated value for high-stakes flags. If `isEnabled('new-payment-provider')` returned `true` for a user whose transaction failed, you want that in the log line, not inferred from the timestamp.
- Separate experimentation flags from release flags. A kill switch for production should not expire when an experiment wraps up. Use different naming prefixes so they are easy to tell apart.
Next Steps
Feature flags in Node.js are a small change with an outsized impact. You stop shipping features and start shipping code, which means faster deploys, safer releases, and a rollback story that takes seconds instead of a pager rotation.
The Rollgate Node.js SDK is open source, 2KB gzipped, and works identically in Express, Fastify, Koa, NestJS, and any other Node.js framework. Local evaluation, SSE streaming, circuit breaker, and kill switches come in the box.
Read the full version with internal links to related guides on rollgate.io/blog/feature-flags-nodejs.