Sebastien Lato

SwiftUI A/B Testing Architecture (Experiments Without Chaos)

A/B testing is where good apps go wrong.

Not because experiments are bad, but because they’re usually bolted on with:

  • random conditionals
  • UI flicker
  • inconsistent user assignment
  • polluted analytics
  • impossible-to-remove code

This post shows how to design a clean, deterministic A/B testing architecture in SwiftUI that:

  • produces trustworthy data
  • avoids UX glitches
  • stays testable
  • disappears cleanly when the experiment ends

🧠 The Core Principle

Experiments must be deterministic and invisible to the UI.

If the UI knows it’s in an experiment, you’re already leaking complexity.


🧱 1. Experiments Are Data, Not Flags

Do not reuse feature flags for experiments.

Instead:

enum Experiment {
    case onboardingLayout
    case pricingCopy
    case feedRanking
}

And variants (a String raw value makes them easy to persist and log later):

enum Variant: String {
    case control
    case treatmentA
    case treatmentB
}

Feature flags exist for safety (kill switches, staged rollouts). Experiments exist for measurement. Mixing the two muddies both.


📦 2. Experiment Assignment Service

protocol ExperimentProvider {
    func variant(for experiment: Experiment) -> Variant
}

A concrete implementation (sketched below):

  • assigns once
  • persists locally
  • stays stable across launches
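
Here is a minimal sketch backed by UserDefaults. The LocalExperimentProvider name is mine, not from any library; it leans on the String raw value above and on the deterministic assignVariant function from the next section:

import Foundation

final class LocalExperimentProvider: ExperimentProvider {
    private let userID: String
    private let defaults: UserDefaults

    init(userID: String, defaults: UserDefaults = .standard) {
        self.userID = userID
        self.defaults = defaults
    }

    func variant(for experiment: Experiment) -> Variant {
        let key = "experiment.\(experiment)"
        // Reuse the stored assignment so the variant is stable across launches.
        if let stored = defaults.string(forKey: key),
           let variant = Variant(rawValue: stored) {
            return variant
        }
        let assigned = assignVariant(userID: userID) // deterministic bucketing (next section)
        defaults.set(assigned.rawValue, forKey: key)
        return assigned
    }
}
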

🔐 3. Deterministic Bucketing

Never randomize on every launch.

func assignVariant(userID: String) -> Variant {
    // Swift's hashValue is randomly seeded per launch, so it would reshuffle
    // users on every restart. Use a stable hash (FNV-1a) instead.
    let hash = userID.utf8.reduce(UInt64(14695981039346656037)) { ($0 ^ UInt64($1)) &* 1099511628211 }
    return hash % 2 == 0 ? .control : .treatmentA
}

Rules:

  • same user → same variant
  • variant does not change mid-session
  • survives app restarts

Without this, your data is garbage.
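
One refinement: if every experiment buckets on the raw userID, the same users land in the same bucket for every experiment, which correlates your results across experiments. Salting the hash with the experiment name keeps assignments independent. A sketch (the salt format here is my assumption):

func assignVariant(userID: String, experiment: Experiment) -> Variant {
    // Salting with the experiment name decorrelates bucketing across experiments.
    let seed = "\(experiment):\(userID)"
    let hash = seed.utf8.reduce(UInt64(14695981039346656037)) { ($0 ^ UInt64($1)) &* 1099511628211 }
    return hash % 2 == 0 ? .control : .treatmentA
}
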


🧭 4. Experiments Choose Features, Not Behavior

Correct:

@ViewBuilder // lets each branch return a different concrete view type
func buildOnboarding() -> some View {
    switch experiments.variant(for: .onboardingLayout) {
    case .control:
        LegacyOnboarding()
    case .treatmentA:
        NewOnboarding()
    default:
        LegacyOnboarding()
    }
}

Incorrect:

Text(experiment == .treatmentA ? "Try this!" : "Welcome")

Experiments should swap entire implementations, not micro-conditionals.


🧬 5. No UI Flicker — Ever

Assignments must happen:

  • before first render
  • before navigation
  • before analytics fire

Never evaluate experiments inside:

  • body
  • .onAppear
  • animations

Experiment choice is part of startup configuration.
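
A minimal sketch of what that looks like in SwiftUI, reusing the LocalExperimentProvider from earlier. MyApp, RootView, and currentUserID() are placeholders, not real APIs:

import SwiftUI

@main
struct MyApp: App {
    // Resolved once, in init, before the first frame renders.
    private let experiments: ExperimentProvider

    init() {
        experiments = LocalExperimentProvider(userID: currentUserID())
    }

    var body: some Scene {
        WindowGroup {
            RootView(experiments: experiments)
        }
    }
}
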


📊 6. Analytics Integration

Every event must include:

  • experiment name
  • variant

analytics.track(
    .buttonTapped,
    context: [
        "experiment": "onboarding_layout",
        "variant": "treatmentA"
    ]
)

Never infer variants later.

If it’s not logged, it didn’t happen.
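
One way to make that rule hard to break is a thin wrapper that stamps the active assignments onto every event automatically. A sketch: ExperimentAwareAnalytics is my name, the key scheme is an assumption, and Analytics/Event mirror the tracking call shown above:

struct ExperimentAwareAnalytics {
    let analytics: Analytics
    let experiments: ExperimentProvider
    let active: [Experiment] // experiments currently running

    func track(_ event: Event, context: [String: String] = [:]) {
        var context = context
        // Stamp every event with the current assignment for each live experiment.
        for experiment in active {
            context["experiment_\(experiment)"] = experiments.variant(for: experiment).rawValue
        }
        analytics.track(event, context: context)
    }
}
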


🧪 7. Testing Experiments

final class MockExperiments: ExperimentProvider {
    let variants: [Experiment: Variant]

    // Classes get no memberwise initializer, so spell it out.
    init(variants: [Experiment: Variant]) {
        self.variants = variants
    }

    func variant(for experiment: Experiment) -> Variant {
        variants[experiment] ?? .control
    }
}

You can now test (example below):

  • control path
  • treatment path
  • rollback safety
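
For example, a quick XCTest that pins a variant and checks the fallback. The fallback-to-control assertion is what proves rollback safety:

import XCTest

final class ExperimentProviderTests: XCTestCase {
    func testPinnedAndFallbackVariants() {
        let experiments = MockExperiments(variants: [.onboardingLayout: .treatmentA])

        // The pinned experiment returns exactly the configured variant.
        XCTAssertEqual(experiments.variant(for: .onboardingLayout), .treatmentA)

        // Unconfigured experiments fall back to control: rollback safety.
        XCTAssertEqual(experiments.variant(for: .pricingCopy), .control)
    }
}
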

⚠️ 8. Experiment Lifecycle Rules

Every experiment must have:

  • hypothesis
  • success metric
  • duration
  • owner
  • removal plan

When an experiment ends:

  • choose winner
  • delete experiment code
  • promote winning implementation
  • remove analytics hooks

Experiments must die quickly.


❌ 9. Common A/B Testing Anti-Patterns

Avoid:

  • random assignment per launch
  • UI-level branching
  • mixing flags and experiments
  • changing variants mid-session
  • running experiments without analytics
  • keeping experiment code forever

These invalidate results.


🧠 Mental Model

Think:

Startup
 → Assign Variant
   → Build Feature
     → Track Metrics
       → Decide
         → Delete Experiment

Not:

“Let’s try both and see”


🚀 Final Thoughts

A correct A/B testing architecture gives you:

  • reliable data
  • confident decisions
  • clean code
  • fearless iteration
  • fast cleanup

Experiments should inform the product, not infect the codebase.
