In this article, you will learn:
- What feature flags are and why they matter for production releases
- The four flag types Kinde supports and when to use each one
- How Kinde delivers flag values through the auth token — no extra SDK required
- How to create your first feature flag in the Kinde dashboard
- How to read flags in a Next.js application using the Kinde SDK
- How to override flags per environment, per organization, and per user
- How to use flags for gradual AI feature rollouts
- How to use string flags for A/B testing UI copy
- How to use integer flags to control per-user AI token limits
- How to use JSON flags for complex feature configurations
- The flag lifecycle: creation, rollout, cleanup
Let's dive in!
## Why Feature Flags Exist
Every developer has shipped a feature to production and immediately wished they had not. Something works fine in staging. It breaks in production. Users are affected. A rollback means a redeployment — and redeployments take time, require coordination, and sometimes break other things in the process.
Feature flags decouple deployment from release. You merge the code, you deploy it, but the feature stays hidden behind a flag set to false. When you are ready to release — or when you want to release to just 10 users first — you flip the flag. No redeployment. No code change. Instant.
For AI products specifically, this matters more than for most software. AI features have a different risk profile. A new model might behave unexpectedly under production load. A new AI agent capability might produce output you did not anticipate. A new prompting strategy might increase latency. You want to be able to release these changes to a subset of users, monitor behavior, and either expand the rollout or kill the feature with a single dashboard toggle.
That is exactly what Kinde's feature flags give you — and unlike standalone feature flag tools, Kinde's flags live in the same auth token as your user's identity and permissions. No separate SDK. No separate API call. No synchronization problem between your auth state and your flag state.
## How Kinde Feature Flags Work
Most feature flag services require you to install a separate SDK, make an API call at runtime to fetch flag values, and cache them in localStorage or memory. This adds latency, adds a dependency, and creates a potential desync between what your auth system knows about the user and what your flag system thinks.
Kinde takes a different approach. Flag values are embedded directly in the JWT access token under the feature_flags claim. Every time the user's token refreshes, their flag values refresh with it. No separate call. No extra SDK. The flags arrive with the auth token.
Here is what that claim looks like in a real Kinde access token:
```json
{
  "sub": "kp_abc123",
  "email": "alice@example.com",
  "feature_flags": {
    "new_ai_model": { "t": "b", "v": false },
    "ai_token_limit": { "t": "i", "v": 1000 },
    "prompt_variant": { "t": "s", "v": "control" },
    "model_config": { "t": "j", "v": { "temperature": 0.7, "max_tokens": 500 } }
  },
  "permissions": ["reports:view"],
  "org_code": "org_xyz789"
}
```
The "t" field is the type ("b" = boolean, "i" = integer, "s" = string, "j" = JSON) and "v" is the value. The Kinde SDK wraps this cleanly so you never have to parse the raw token.
Note: Kinde feature flags are completely free on every plan, for every user.
## The Four Flag Types
Kinde supports four flag types. The type is set at creation and cannot be changed later — the key also cannot be changed, so choose carefully before saving.
**Boolean** is the classic on/off switch. Use it for anything that is either enabled or disabled: a new feature, a beta programme, a kill switch for a misbehaving AI capability.

- Flag key: `new_ai_model`
- Type: Boolean
- Default: `false`
- Use case: Hide the new GPT-5 integration until it is ready for gradual rollout
**String** passes configuration values as text. Use it for A/B testing copy, selecting between variants of a feature, or setting environment-specific values.

- Flag key: `prompt_variant`
- Type: String
- Default: `"control"`
- Use case: Test different AI prompt strategies: `"control"` vs `"chain_of_thought"` vs `"few_shot"`
**Integer** passes numeric values. Use it to set per-user or per-organization quotas, limits, or thresholds.

- Flag key: `ai_token_limit`
- Type: Integer
- Default: `1000`
- Use case: Control how many AI tokens each user can consume per session
**JSON** passes structured configuration objects. Use it for complex feature configurations that involve multiple related values.

- Flag key: `model_config`
- Type: JSON
- Default: `{ "temperature": 0.7, "max_tokens": 500, "model": "gpt-4o" }`
- Use case: Send the full AI model configuration to the app without a redeployment
| Type | SDK method | Best for |
|---|---|---|
| Boolean | `getBooleanFlag` | Feature on/off, kill switches, beta access |
| String | `getStringFlag` | A/B variants, copy testing, environment labels |
| Integer | `getIntegerFlag` | Rate limits, quotas, timeouts, seat counts |
| JSON | `getFlag` (with type) | Model configs, complex feature settings |
## Step #1: Create a Feature Flag in Kinde
Navigate to Releases → Feature flags → Add feature flag in your Kinde dashboard.
Fill in the flag details:
- Name: A human-readable label, for example "New AI Model". This can be changed later.
- Key: The identifier your code uses, for example `new_ai_model`. Use lowercase with underscores. This cannot be changed after creation.
- Type: Select one of Boolean, String, Integer, or JSON. This also cannot be changed after creation.
- Default value: The value every user gets unless overridden. For a new feature you are not ready to release, the Boolean default is false.
- Allow overrides: Select whether the flag value can be overridden per environment, per organization, or per user. Enable organizations — you will want the flexibility.
Select Save. The flag now exists with a global default of false. Every user in every organization gets false until you explicitly enable it somewhere.
Wonderful! The flag is created and safely off. Now wire it into your application.
## Step #2: Read Flags in a Next.js Application
The Kinde Next.js SDK exposes flag methods through getKindeServerSession() on the server and useKindeBrowserClient() on the client. For most production use cases, read flags server-side.
```tsx
// app/dashboard/page.tsx
import { getKindeServerSession } from "@kinde-oss/kinde-auth-nextjs/server";
import { redirect } from "next/navigation";

export default async function DashboardPage() {
  const {
    isAuthenticated,
    getUser,
    getBooleanFlag,
    getIntegerFlag,
    getStringFlag,
  } = getKindeServerSession();

  if (!(await isAuthenticated())) {
    redirect("/api/auth/login");
  }

  const user = await getUser();

  // Read each flag with a default value as fallback.
  // The default value is used if the flag doesn't exist in the token.
  const showNewAIModel = await getBooleanFlag("new_ai_model", false);
  const aiTokenLimit = await getIntegerFlag("ai_token_limit", 1000);
  const promptVariant = await getStringFlag("prompt_variant", "control");

  return (
    <main className="p-8">
      <h1>Dashboard</h1>

      {/* Only render the new AI section if the flag is on */}
      {showNewAIModel && (
        <section className="mt-6 rounded-lg bg-purple-50 p-6">
          <h2 className="font-semibold text-purple-900">
            ✨ New AI Model (Beta)
          </h2>
          <p className="text-sm text-purple-700 mt-1">
            You have early access to our upgraded AI model.
          </p>
          <AIModelInterface
            tokenLimit={aiTokenLimit}
            promptVariant={promptVariant}
          />
        </section>
      )}

      {/* Standard content visible to all users */}
      <section className="mt-6">
        <h2 className="font-semibold">Standard Features</h2>
        <StandardInterface tokenLimit={aiTokenLimit} />
      </section>
    </main>
  );
}

// Placeholder components
function AIModelInterface({
  tokenLimit,
  promptVariant,
}: {
  tokenLimit: number;
  promptVariant: string;
}) {
  return (
    <div className="mt-4">
      <p className="text-xs text-purple-500">
        Token limit: {tokenLimit} | Variant: {promptVariant}
      </p>
    </div>
  );
}

function StandardInterface({ tokenLimit }: { tokenLimit: number }) {
  return <p className="text-sm text-gray-500">Token limit: {tokenLimit}</p>;
}
```
Note: getBooleanFlag("new_ai_model", false) takes the flag key and a default value. If the flag does not exist in the user's token for any reason (flag not created yet, SDK version mismatch), the default value is returned. This makes your application safe to deploy before the flag exists in Kinde.
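If you do need a flag in a client component (for example, to toggle a purely cosmetic element), `useKindeBrowserClient` exposes flag getters on the client side. A minimal sketch, assuming the browser client's flag helpers mirror the server-side names in your SDK version (check the SDK reference for your exact version):

```tsx
"use client";
// Client-side flag read — assumes useKindeBrowserClient exposes the same
// flag getters as the server session; verify against your SDK version.
import { useKindeBrowserClient } from "@kinde-oss/kinde-auth-nextjs";

export function BetaBadge() {
  const { getBooleanFlag, isLoading } = useKindeBrowserClient();

  // Flags are not available until the client session has loaded
  if (isLoading) return null;

  // Same key and default-value fallback as the server-side call
  const showNewAIModel = getBooleanFlag("new_ai_model", false);

  return showNewAIModel ? (
    <span className="rounded bg-purple-100 px-2 py-1 text-xs">Beta</span>
  ) : null;
}
```

Prefer the server-side read for anything security-relevant; the client-side value is for presentation only.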
## Step #3: Override Flags Per Environment
The most common override pattern: the new feature is false globally, but true in your staging environment so your team can test it before any users see it.
Navigate to Settings → Environment → Feature flags (make sure you are in your staging environment, not production).
Find new_ai_model and select Edit value. Switch it to true. Select Save.
Now everyone in your staging environment sees the new AI feature. Everyone in production still sees false. Your team tests it, confirms it works, and when ready you flip it on in production — no code change, no redeployment.
The cascade order for flag evaluation is:
Business default → Environment override → Organization override → User override
The most specific level wins. A user override takes precedence over everything. An organization override takes precedence over the environment. The business default is the fallback when nothing else is set.
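That resolution order can be sketched as plain TypeScript. Kinde evaluates this for you server-side; `resolveFlag` and its level names are hypothetical, shown only to make the precedence explicit:

```ts
// Illustrative sketch of the override cascade — not Kinde internals.
// Undefined means "no override set at this level".
type FlagLevels<T> = {
  businessDefault: T;
  environment?: T;
  organization?: T;
  user?: T;
};

// Most specific level wins: user > organization > environment > default.
// `??` only skips missing levels, so an explicit `false` override still wins.
function resolveFlag<T>(levels: FlagLevels<T>): T {
  return (
    levels.user ??
    levels.organization ??
    levels.environment ??
    levels.businessDefault
  );
}

console.log(resolveFlag({ businessDefault: false, organization: true })); // true
console.log(resolveFlag({ businessDefault: 1000, user: 5000 })); // 5000
```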
## Step #4: Override Flags Per Organization for Gradual Rollouts
For B2B AI products, you often want to enable a new feature for specific customer organizations before rolling it out globally. An org-level override is exactly how you do this.
Navigate to Organizations → select your pilot customer's org → Feature flags.
Select Edit value next to `new_ai_model` and set it to true. That organization's users now see the new AI feature. Every other organization still inherits the false default.
This is gradual rollout in practice. Your rollout sequence for a significant AI feature might look like this:
Week 1: Enable for your own organization (internal testing)
Week 2: Enable for 2-3 design partner organizations (trusted early adopters)
Week 3: Enable for 10% of organizations (broader beta)
Week 4: Set business default to true (full rollout) and remove org overrides
No deployments. No feature branches. No big-bang release. Each step is a toggle in the Kinde dashboard.
## Step #5: Override Flags Per User for Beta Access
Sometimes you want to give specific individuals early access — a power user who has asked for beta features, a journalist reviewing the product, or your own test account.
Navigate to Users → select the user → Feature flags.
Find `new_ai_model` and select Edit value. Set it to true. This user now has the new AI feature regardless of which organization they belong to and regardless of the environment default.
Note: User-level overrides apply across all organizations and environments for that user. They are the most specific level and cannot be further scoped. If you want to give a user access in one org but not another, use org-level overrides instead.
## Step #6: Use String Flags for AI Prompt A/B Testing
String flags unlock a particularly powerful pattern for AI products: testing different prompt strategies without redeployment.
Create a flag called ai_prompt_strategy of type String with a default value of "standard". Enable environment and organization overrides.
In your API route that calls the AI:
```ts
// app/api/ai/chat/route.ts
import { getKindeServerSession } from "@kinde-oss/kinde-auth-nextjs/server";
import { NextRequest, NextResponse } from "next/server";

// Prompt templates for each variant
const PROMPT_STRATEGIES = {
  standard: (userMessage: string) =>
    `You are a helpful assistant. User: ${userMessage}`,
  chain_of_thought: (userMessage: string) =>
    `You are a helpful assistant. Think step-by-step before answering.
User: ${userMessage}
Let me work through this carefully:`,
  few_shot: (userMessage: string) =>
    `You are a helpful assistant. Here are examples of good responses:
Q: What is 2+2? A: 4. Q: What is the capital of France? A: Paris.
Now answer: ${userMessage}`,
} as const;

type PromptStrategy = keyof typeof PROMPT_STRATEGIES;

export async function POST(request: NextRequest) {
  const { isAuthenticated, getStringFlag, getIntegerFlag } =
    getKindeServerSession();

  if (!(await isAuthenticated())) {
    return NextResponse.json({ error: "Unauthenticated" }, { status: 401 });
  }

  // Read the prompt strategy flag for this user
  const strategy = (await getStringFlag(
    "ai_prompt_strategy",
    "standard"
  )) as PromptStrategy;

  // Read the token limit flag for this user
  const maxTokens = await getIntegerFlag("ai_token_limit", 1000);

  const { message } = await request.json();

  // Build the prompt based on the user's assigned variant
  const promptBuilder =
    PROMPT_STRATEGIES[strategy] ?? PROMPT_STRATEGIES.standard;
  const prompt = promptBuilder(message);

  // Call your AI provider with the variant prompt and limit
  const aiResponse = await callAIProvider({
    prompt,
    maxTokens,
    // Log which variant was used for analytics
    metadata: { strategy, userId: "from_session" },
  });

  return NextResponse.json({
    response: aiResponse,
    // Optionally surface the variant in the response for debugging
    _variant: process.env.NODE_ENV === "development" ? strategy : undefined,
  });
}

async function callAIProvider({
  prompt,
  maxTokens,
  metadata,
}: {
  prompt: string;
  maxTokens: number;
  metadata: Record<string, string>;
}) {
  // Replace with your actual AI provider call
  console.log("Calling AI with strategy:", metadata.strategy);
  return "AI response placeholder";
}
```
To run the A/B test: set ai_prompt_strategy to "chain_of_thought" for half your organizations in Kinde's org overrides. The other half inherits "standard". Monitor which variant produces better outcomes in your analytics. When you have a winner, update the business default to the winning strategy and remove the overrides.
## Step #7: Use Integer Flags for AI Token Limits
AI API costs are directly proportional to token consumption. Integer flags let you set per-user or per-plan token limits without redeploying when you need to adjust them.
Create a flag called ai_token_limit of type Integer with a default value of 1000. Enable all overrides.
Use it in your AI route handler:
```ts
// In your AI route handler — enforce the token limit before calling the AI.
// Assumes `body`, `prompt`, and an `openai` client are in scope.
const maxTokens = await getIntegerFlag("ai_token_limit", 1000);

// Validate the requested tokens against the user's limit
const requestedTokens = body.max_tokens ?? 500;
const enforcedTokens = Math.min(requestedTokens, maxTokens);

if (requestedTokens > maxTokens) {
  console.log(
    `User requested ${requestedTokens} tokens, capped to ${maxTokens}`
  );
}

// Use enforcedTokens in your AI call
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: prompt }],
  max_tokens: enforcedTokens,
});
```
Now adjust limits per organization without touching code:
- Free plan orgs: set `ai_token_limit` to `500` via org override
- Pro plan orgs: inherit the default of `1000`
- Enterprise orgs: set `ai_token_limit` to `5000` via org override
- Your own internal org: set `ai_token_limit` to `10000` for testing
When your infrastructure changes and you need to globally reduce the default limit, change the business default in the Kinde dashboard. Every user's limit updates at their next token refresh — no deployment.
## Step #8: Use JSON Flags for Complex AI Configurations
When multiple related configuration values need to change together, JSON flags keep them atomic. If you use separate flags for each value, you can end up in a state where some values have been updated and others have not, which can cause subtle bugs.
Create a flag called ai_model_config of type JSON with a default value:
```json
{
  "model": "gpt-4o",
  "temperature": 0.7,
  "max_tokens": 500,
  "system_prompt": "You are a helpful AI assistant."
}
```
Enable environment and organization overrides. Now use it in your application:
```ts
// app/api/ai/generate/route.ts
import { getKindeServerSession } from "@kinde-oss/kinde-auth-nextjs/server";
import { NextRequest, NextResponse } from "next/server";

interface AIModelConfig {
  model: string;
  temperature: number;
  max_tokens: number;
  system_prompt: string;
}

const DEFAULT_CONFIG: AIModelConfig = {
  model: "gpt-4o",
  temperature: 0.7,
  max_tokens: 500,
  system_prompt: "You are a helpful AI assistant.",
};

export async function POST(request: NextRequest) {
  const { isAuthenticated, getFlag } = getKindeServerSession();

  if (!(await isAuthenticated())) {
    return NextResponse.json({ error: "Unauthenticated" }, { status: 401 });
  }

  // Read the JSON flag — use getFlag with type "j" for JSON
  const configFlag = await getFlag(
    "ai_model_config",
    DEFAULT_CONFIG,
    "j" // "j" = JSON type
  );

  // configFlag.value contains the parsed JSON object
  const config = (configFlag?.value as AIModelConfig) ?? DEFAULT_CONFIG;

  const { message } = await request.json();

  // Use the full config from the flag — one atomic update in Kinde
  // changes model, temperature, max_tokens, and system_prompt together
  const response = await callAI({
    model: config.model,
    temperature: config.temperature,
    maxTokens: config.max_tokens,
    systemPrompt: config.system_prompt,
    userMessage: message,
  });

  return NextResponse.json({ response });
}

async function callAI({
  model,
  temperature,
  maxTokens,
  systemPrompt,
  userMessage,
}: {
  model: string;
  temperature: number;
  maxTokens: number;
  systemPrompt: string;
  userMessage: string;
}) {
  // Your AI provider call using the flag-configured values
  console.log(
    `Using model: ${model}, temp: ${temperature}, max_tokens: ${maxTokens}`
  );
  return "AI response placeholder";
}
```
When you want to test a newer model with different parameters, update the JSON flag value for your staging environment. If it works well, update the production environment override. If there is a problem, revert the JSON flag — all four values revert atomically.
## Putting It All Together
Here is a complete rollout lifecycle for a major AI feature using flags:
Phase 1 — Development (flag default: false):
Code is deployed behind the flag. Nothing changes for users. Your team iterates freely.
Phase 2 — Internal testing (environment override: staging true):
The feature is live in staging. Your team tests it thoroughly against real data. Production users are unaffected.
Phase 3 — Beta access (user overrides: specific users true):
You enable the flag for 5 power users and your own account. Collect feedback. Fix issues without touching code.
Phase 4 — Gradual rollout (org overrides: pilot orgs true):
Enable for 3 design partner organizations. Monitor AI token consumption, error rates, user satisfaction. Expand to 20% of orgs if metrics look good.
Phase 5 — Full release (business default: true):
Update the global default to true. All remaining users see the feature. Remove org-level overrides — they are no longer needed.
Phase 6 — Cleanup (delete the flag):
Once the feature is stable and you are confident you will not need to roll back, remove the flag from your code and delete it from Kinde. Temporary flags are technical debt. A flag that controls a permanent feature state can stay; a flag that was used to control a release should go.
## Flag Naming Conventions
A flag library grows fast. Without conventions it becomes unmanageable. Use these patterns from the start:
Prefix by category:

- `ai_*`: AI feature flags (`ai_new_model`, `ai_token_limit`, `ai_prompt_strategy`)
- `ui_*`: UI and design flags (`ui_dark_mode`, `ui_new_nav`)
- `billing_*`: billing and plan flags (`billing_annual_discount`)
- `beta_*`: beta features (`beta_voice_input`, `beta_export_pdf`)
Use positive names for booleans: show_new_dashboard not hide_old_dashboard. Double negatives in code (if (!hideOldDashboard)) cause bugs.
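To see why, compare the two namings in a small sketch (the flag names here are hypothetical):

```ts
// Negative name: the condition is a double negative and easy to misread.
const hideOldDashboard = false;
const rendersNewViaNegative = !hideOldDashboard; // "not hide old"... which is on?

// Positive name: the condition reads the way the logic runs.
const showNewDashboard = true;
const rendersNewViaPositive = showNewDashboard;

console.log(rendersNewViaNegative, rendersNewViaPositive); // true true
```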
Use descriptive names for strings and integers: ai_model_name not model, ai_max_tokens_per_request not limit.
Document the flag purpose in the description field. Kinde lets you add a description when creating the flag. Write what the flag does, what the rollout plan is, and when it should be deleted. Future you will appreciate it.
## Conclusion
In this article, you learned how to use Kinde feature flags to safely roll out features in production — using all four flag types, managing overrides at the environment, organization, and user levels, and running a phased rollout from internal testing to full release without a single redeployment.
The key insight that makes Kinde's approach different: flags are delivered in the auth token, not fetched separately. The same token that authenticates your user also carries their flag values, their permissions, and their organization context. One integration. One token. Everything your application needs to make runtime decisions.
Kinde is free for up to 10,500 monthly active users, no credit card required. Feature flags are available on every plan. Create your account at kinde.com and start shipping code with confidence.





![Kinde Organizations > [Org name] > Feature flags page showing the list of flags and their values for this specific organization](https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e1gwdozp7mz1btulayn.png)
![Kinde Users > [User name] > Feature flags tab showing the list of all flags and the user's current values. The](https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmqmwsh3i2kg049bvk29.png)
