The A/B Testing Trap
Most developer-tool companies A/B test the wrong things:
button colors, heading copy, image placement.
The tests that move revenue: pricing pages, onboarding flows, upgrade prompts.
Here's a minimal A/B testing implementation for Next.js that focuses on what matters.
Feature Flags with Edge Config
// Vercel Edge Config for instant flag updates (no deploy)
import { get } from '@vercel/edge-config'

type Variant = 'control' | 'treatment'

export async function getVariant(
  flagName: string,
  userId: string
): Promise<Variant> {
  const flagEnabled = await get<boolean>(flagName)
  if (!flagEnabled) return 'control'

  // Stable assignment: same user always gets same variant
  const hash = await stableHash(`${flagName}:${userId}`)
  return hash % 2 === 0 ? 'control' : 'treatment'
}

async function stableHash(str: string): Promise<number> {
  const encoder = new TextEncoder()
  const data = encoder.encode(str)
  const hashBuffer = await crypto.subtle.digest('SHA-256', data)
  const hashArray = Array.from(new Uint8Array(hashBuffer))
  return hashArray[0] // First byte: 0-255
}
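Because the bucketing is deterministic, it can be sanity-checked offline. A self-contained sketch of the same idea, with Node's `crypto` standing in for the Web Crypto call (function and flag names here are illustrative, not from the original):

```typescript
import { createHash } from 'node:crypto'

type Variant = 'control' | 'treatment'

// Same idea as getVariant, but synchronous and dependency-free:
// hash "flag:user", take the first byte, split on parity.
function assignVariant(flagName: string, userId: string): Variant {
  const firstByte = createHash('sha256')
    .update(`${flagName}:${userId}`)
    .digest()[0]
  return firstByte % 2 === 0 ? 'control' : 'treatment'
}
```

Repeated calls with the same inputs always return the same arm, and across many users the split lands close to 50/50.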
Cookie-Based Variant Persistence
// middleware.ts -- assign variant on first visit, persist in cookie
import { NextRequest, NextResponse } from 'next/server'

export function middleware(request: NextRequest) {
  const response = NextResponse.next()

  // Assign pricing experiment variant
  if (!request.cookies.has('pricing-variant')) {
    const variant = Math.random() < 0.5 ? 'control' : 'treatment'
    response.cookies.set('pricing-variant', variant, {
      maxAge: 30 * 24 * 60 * 60, // 30 days
      httpOnly: true,
      sameSite: 'strict'
    })
  }

  return response
}

export const config = { matcher: '/pricing' }
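The 50/50 coin flip generalizes to a weighted rollout if you only want to expose a slice of traffic to the new price. A hypothetical helper (not part of the original middleware):

```typescript
type Variant = 'control' | 'treatment'

// treatmentShare is the fraction of traffic sent to treatment,
// e.g. 0.1 sends roughly 10% of new visitors to the test price.
function weightedVariant(treatmentShare: number): Variant {
  return Math.random() < treatmentShare ? 'treatment' : 'control'
}
```

In the middleware above you would swap the `Math.random() < 0.5` line for something like `weightedVariant(0.1)` to start a risky pricing test cautiously.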
The Pricing Page Test
// app/pricing/page.tsx
import { cookies } from 'next/headers'

export default function PricingPage() {
  // Fall back to control on a missing or tampered cookie
  const cookieValue = cookies().get('pricing-variant')?.value
  const variant = cookieValue === 'treatment' ? 'treatment' : 'control'

  const prices = {
    control: { monthly: 29, annual: 19, label: '$29/month' },
    treatment: { monthly: 39, annual: 25, label: '$39/month' },
  }[variant]

  return (
    <PricingLayout>
      <PriceCard
        price={prices.monthly}
        label={prices.label}
        stripeLink={variant === 'control'
          ? 'https://buy.stripe.com/control_link'
          : 'https://buy.stripe.com/treatment_link'}
      />
    </PricingLayout>
  )
}
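One wiring detail the conversion tracking below depends on: the variant has to be attached to the Stripe Checkout session's metadata when checkout is created, which static payment links can't do. A sketch of building those params server-side (the helper name and price IDs are hypothetical):

```typescript
type CheckoutParams = {
  mode: 'subscription'
  line_items: { price: string; quantity: number }[]
  metadata: { pricingVariant: string; userId: string }
}

// Carry the variant (read from the pricing-variant cookie) into
// metadata so the webhook can attribute the conversion to an arm.
function buildCheckoutParams(
  variant: 'control' | 'treatment',
  userId: string
): CheckoutParams {
  return {
    mode: 'subscription',
    line_items: [
      {
        // One Stripe Price per arm keeps revenue reporting clean.
        price: variant === 'control' ? 'price_control_29' : 'price_treatment_39',
        quantity: 1,
      },
    ],
    metadata: { pricingVariant: variant, userId },
  }
}
```

These params would go to `stripe.checkout.sessions.create(...)` in place of the two hard-coded payment links on the pricing page.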
Tracking Conversions
// Log which variant converted
// Triggered by Stripe webhook on successful payment
async function logConversion(stripeSessionId: string, amount: number) {
  // Get variant from session metadata (set at checkout creation)
  const session = await stripe.checkout.sessions.retrieve(stripeSessionId)
  const variant = session.metadata?.pricingVariant ?? 'unknown'

  await db.experiment.create({
    data: {
      name: 'pricing-2026-q2',
      variant,
      event: 'conversion',
      amount,
      userId: session.metadata?.userId
    }
  })
}

// Query results:
const results = await db.experiment.groupBy({
  by: ['variant'],
  _count: { id: true },
  _sum: { amount: true },
  where: { name: 'pricing-2026-q2', event: 'conversion' }
})

// control: { count: 12, revenue: $348 }
// treatment: { count: 9, revenue: $351 }
// Too early to call -- need 100+ conversions per variant
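Once both arms have real volume, a two-proportion z-test is a simple way to check whether the gap is noise. A minimal sketch, not a replacement for a proper stats library (the visitor counts used below are assumptions, not from the data above):

```typescript
// z statistic for the difference between two conversion rates;
// |z| > 1.96 corresponds to roughly 95% confidence.
function zScore(
  convA: number, visitorsA: number,
  convB: number, visitorsB: number
): number {
  const pA = convA / visitorsA
  const pB = convB / visitorsB
  const pooled = (convA + convB) / (visitorsA + visitorsB)
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB))
  return (pA - pB) / se
}
```

With 12 vs 9 conversions on, say, 500 visitors per arm, |z| comes out well under 1.96, which is exactly why the result above is too early to call.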
What to Test (In Priority Order)
High impact (test these):
1. Price points ($19 vs $29 vs $49)
2. Annual vs monthly default
3. Free trial vs no trial
4. Upgrade prompt timing and copy
5. Onboarding step count
Low impact (skip for now):
- Button colors
- Hero image
- Testimonial order
- Font choices
- Footer layout
As a rough rule of thumb, you need ~100 conversions per variant before a significance test means anything.
Don't call a winner on 10 sales.
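The ~100-conversions figure follows from a standard sample-size approximation; a sketch assuming 80% power and 95% confidence (a textbook rule of thumb, not from the original post):

```typescript
// n per arm ≈ 16 · p(1 − p) / d², where p is the baseline conversion
// rate and d is the minimum absolute lift you want to detect.
function sampleSizePerArm(baseline: number, minDetectableLift: number): number {
  return Math.ceil(
    (16 * baseline * (1 - baseline)) / (minDetectableLift * minDetectableLift)
  )
}
```

At a 2% baseline conversion rate, detecting a 1-point lift takes roughly 3,100 visitors per arm, which is why 10 sales proves nothing.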
A/B Testing with Atlas
I run pricing tests on whoffagents.com -- currently testing
$29 vs $39 for the MCP Security Scanner.
Results will be shared publicly once statistical significance is reached.
The AI SaaS Starter Kit ships with the variant assignment and cookie patterns above.
$99 one-time at whoffagents.com