
SoftwareDevs mvpfactory.io

Posted on • Originally published at mvpfactory.io


---
title: "Server-Driven Paywall A/B Testing That Actually Moves Revenue"
published: true
description: "Build server-driven paywalls with RevenueCat custom placements, feature flags for cohort targeting, platform-specific exit offers, and the statistical framework that tests revenue-per-user instead of conversion rate."
tags: kotlin, android, ios, architecture
canonical_url: https://blog.mvpfactory.co/server-driven-paywall-ab-testing-that-moves-revenue
---

## What We're Building

Let me show you a pattern I use in every project that involves subscription monetization: a server-driven paywall system where you control offer tiers, discount depth, copy, and exit-intent triggers — all without shipping an app update. We'll wire up RevenueCat custom placements, integrate feature flags for cohort assignment, implement exit offers on both Android and iOS, and set up the statistical framework that measures what actually matters.

## Prerequisites

- RevenueCat SDK configured in your Android/iOS project
- A feature flag service (LaunchDarkly or Statsig)
- Familiarity with Kotlin Coroutines and Swift async/await
- Google Play Billing Library 7 / StoreKit 2

## Step 1: The Server-Driven Pipeline

The architecture is straightforward. RevenueCat Offerings with Custom Placements feed into your feature flag service, which handles cohort assignment and payload delivery. The client fetches the placement config, renders the variant, tracks events, and measures LTV.

RevenueCat's custom placements let you define named paywall surfaces — `main_paywall`, `exit_offer`, `upgrade_nudge` — and map each to a specific offering remotely. Your client code stays thin:

```kotlin
// Fetch offerings and resolve the offering mapped to this placement remotely
val offerings = Purchases.sharedInstance.awaitOfferings()
val offering = offerings.getCurrentOfferingForPlacement("exit_offer") ?: return
val packages = offering.availablePackages
// Render the server-defined paywall variant from `packages`
```


No hardcoded product IDs. No app update to test a new discount tier.

## Step 2: Platform-Specific Exit Offers

Exit offers fire when a user signals intent to leave the paywall. Here is the gotcha that will save you hours: detection differs significantly across platforms.

| Signal | Android | iOS |
|---|---|---|
| Back navigation | `OnBackPressedCallback` via `BackHandler` | `UIAdaptivePresentationControllerDelegate.presentationControllerDidAttemptToDismiss` |
| Swipe dismiss | N/A (back gesture covers this) | `UISheetPresentationController` delegate callbacks |
| Lifecycle timeout | `Lifecycle.Event.ON_PAUSE` after threshold | `viewWillDisappear` with timer validation |
| Trigger control | Server flag: `exit_offer_enabled` | Same flag, shared config |

On iOS with StoreKit 2, `isEligibleForIntroOffer` is async and user-specific. On Android with Play Billing Library 7, eligibility lives in `ProductDetails.SubscriptionOfferDetails`. You must pre-fetch eligibility *before* showing the exit offer. A 300ms delay on an exit intent screen kills the interaction.
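One way to avoid that delay is to cache eligibility when the paywall loads, so the exit offer renders synchronously. Here is a minimal sketch; the `fetchEligibility` lambda is a hypothetical stand-in for the real Play Billing or StoreKit call:

```kotlin
// Sketch: pre-fetch intro-offer eligibility at paywall load so the exit
// offer can read it synchronously. `fetchEligibility` stands in for the
// platform billing call (Play Billing / StoreKit).
class ExitOfferCache(private val fetchEligibility: (String) -> Boolean) {
    private val cache = mutableMapOf<String, Boolean>()

    // Call this when the paywall screen loads, off the main thread.
    fun prefetchAll(productIds: List<String>) {
        productIds.forEach { id -> cache[id] = fetchEligibility(id) }
    }

    // Synchronous read at exit-intent time: no network round trip.
    fun isEligible(productId: String): Boolean = cache[productId] ?: false
}
```

Defaulting to `false` for un-prefetched products is deliberate: showing no exit offer is better than showing one the store will reject.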

## Step 3: The Right Primary Metric

The docs don't mention this, but most teams test conversion rate and ship the "winner" — then watch revenue stay flat. Consider:

| Variant | Conversion Rate | Avg Discount | Revenue Per User |
|---|---|---|---|
| A (no discount) | 3.2% | 0% | $1.92 |
| B (50% off annual) | 5.8% | 50% | $1.45 |

Variant B "wins" on conversion. Variant A generates 32% more revenue per user exposed. Your primary metric should be **revenue-per-user (RPU)**: total revenue divided by total users exposed, including non-converters.
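The arithmetic is trivial, but it is worth wiring RPU as a first-class metric rather than deriving it ad hoc. A sketch, using the table's numbers (the sample size of 1,000 per variant is illustrative):

```kotlin
// Revenue-per-user: total revenue over everyone exposed, converters or not.
data class VariantStats(
    val usersExposed: Int,
    val conversions: Int,
    val totalRevenue: Double
) {
    val conversionRate: Double get() = conversions.toDouble() / usersExposed
    val revenuePerUser: Double get() = totalRevenue / usersExposed
}

// VariantStats(1000, 32, 1920.0).revenuePerUser  ≈ 1.92 (variant A)
// VariantStats(1000, 58, 1450.0).revenuePerUser  ≈ 1.45 (variant B)
```

Variant B's higher conversion rate and lower RPU fall straight out of the same struct, which makes it harder to report one without the other.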

RPU has high variance (a coefficient of variation around 3–5 for typical subscription apps). For a 10% RPU lift at 80% power and 95% confidence, expect to need **5,000–10,000 users per variant at minimum**. Use sequential testing (Bayesian credible intervals or O'Brien–Fleming spending functions) to avoid the peeking problem, which inflates false positives from 5% to over 25%. Statsig handles this natively.
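For a back-of-the-envelope sample size, the standard two-sample normal approximation is enough. A sketch (this is the textbook formula, not anything SDK-specific; note that at the higher end of the CV range the requirement climbs well past 10,000):

```kotlin
import kotlin.math.pow
import kotlin.math.ceil

// Two-sample normal approximation for users needed per variant:
//   n = 2 * (z_alpha/2 + z_beta)^2 * (sigma / delta)^2
// With sigma = cv * mean and delta = relativeLift * mean, the mean cancels.
// Defaults: z = 1.96 (95% confidence, two-sided), z = 0.84 (80% power).
fun usersPerVariant(
    cv: Double,              // coefficient of variation of RPU
    relativeLift: Double,    // minimum detectable effect, e.g. 0.10
    zAlpha: Double = 1.96,
    zBeta: Double = 0.84
): Int = ceil(2 * (zAlpha + zBeta).pow(2) * (cv / relativeLift).pow(2)).toInt()
```

With a CV of 2.5 and a 10% lift this lands around 9,800 per variant; at a CV of 4 it is roughly 25,000, which is why sequential testing matters for most apps.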

## Step 4: Cohort Isolation

For apps with smaller user bases — I run into this with niche productivity tools like [HealthyDesk](https://play.google.com/store/apps/details?id=com.healthydesk), a break reminder and desk exercise app I built for developers — experiment contamination is a real risk. A user who sees the exit offer in one session and the control in another pollutes both cohorts.

Assign cohorts at the user level and persist in RevenueCat subscriber attributes:

```kotlin
Purchases.sharedInstance.setAttributes(
    mapOf("experiment_cohort" to flagService.getCohort(userId))
)
```
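Most flag services do the bucketing for you, but if you ever assign client-side, keep it a pure function of the user ID so assignment is sticky across sessions and devices. A sketch (the variant names are illustrative):

```kotlin
// Deterministic, sticky cohort assignment: the same userId always maps to
// the same bucket, regardless of session. floorMod keeps negative
// hashCodes from producing a negative index.
fun assignCohort(userId: String, variants: List<String>): String =
    variants[Math.floorMod(userId.hashCode(), variants.size)]
```

Because JVM `String.hashCode` is specified and stable, two sessions (or two devices) with the same user ID can never land in different cohorts.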


## Step 5: Event Taxonomy

Here is the minimal setup to get this working — your pipeline needs these events to close the loop:

| Event | Key Properties | Purpose |
|---|---|---|
| `paywall_impression` | `placement_id`, `variant`, `cohort` | RPU denominator |
| `exit_offer_triggered` | `trigger_type`, `variant` | Exit funnel tracking |
| `purchase_initiated` | `product_id`, `offer_type`, `discount_pct` | Conversion + discount depth |
| `purchase_completed` | `revenue`, `currency`, `is_trial` | Revenue attribution |
| `subscription_renewed` | `period`, `revenue` | LTV calculation |

Without `discount_pct` on the purchase event, you cannot decompose whether a revenue change came from volume or price. Non-negotiable.
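To make the decomposition concrete, here is a sketch of the identity it relies on: RPU factors into conversion rate times average realized price, so a change in either factor is visible separately. The event shape mirrors the taxonomy above; field names are illustrative:

```kotlin
// With discount depth on every purchase event, an RPU difference can be
// split into a volume effect (conversion rate) and a price effect
// (average realized price), since RPU = conversion * avg price.
data class PurchaseEvent(
    val productId: String,
    val discountPct: Double,
    val revenue: Double
)

data class Decomposition(
    val conversionRate: Double,
    val avgRealizedPrice: Double,
    val rpu: Double
)

fun decompose(events: List<PurchaseEvent>, usersExposed: Int): Decomposition {
    val totalRevenue = events.sumOf { it.revenue }
    val conversionRate = events.size.toDouble() / usersExposed
    val avgPrice = if (events.isEmpty()) 0.0 else totalRevenue / events.size
    return Decomposition(conversionRate, avgPrice, totalRevenue / usersExposed)
}
```

If a variant's RPU drops while its conversion rate rises, `avgRealizedPrice` tells you immediately that discount depth, not volume, drove the change.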

## Gotchas

- **Testing conversion rate alone is misleading.** When discount depth varies across variants, conversion rate decouples from revenue. Wire RPU as your primary metric from day one.
- **Pre-fetch offer eligibility before exit triggers fire.** StoreKit 2 and Play Billing Library 7 handle eligibility differently. Cache it when the paywall loads, not when the exit offer appears.
- **Session-level cohort assignment destroys experiments.** Persist assignments in RevenueCat subscriber attributes and enforce across sessions. For small-audience apps, contamination will kill statistical power faster than insufficient sample size.
- **Peeking at results daily** inflates your false positive rate from 5% to over 25%. Use sequential testing or commit to a fixed sample size up front.

## Wrapping Up

Server-driven paywalls give you the iteration speed to test what matters: revenue per user, not conversion theater. Keep the client thin, let RevenueCat and your feature flag service own the presentation logic, and build your event taxonomy to connect impressions all the way through to LTV. The teams that get this pipeline right compound gains every sprint.
