DEV Community

Rishi
Commit–Reveal Isn’t Enough: Designing Provably Fair Systems That Actually Survive Verification

"Provably fair" has become a common label in systems that generate public outcomes — games, raffles, lotteries, NFT mints, even allocation engines.

Most implementations follow the same pattern:

  1. Server generates a secret seed
  2. Server publishes a hash of the seed
  3. Later, the seed is revealed
  4. Users verify the hash matches

This prevents the server from changing the seed after publishing the commitment.

And that's where most systems stop.

The problem? That only prevents one very specific type of cheating. It does not eliminate trust.

If you're building a system that claims to be provably fair, here's what actually matters at the architectural level.


1. The Real Enemy: Timing Asymmetry

Fairness failures almost never come from broken cryptography.

They come from who moves last.

Consider a simplified flow:

  1. Server commits: H(server_seed)
  2. User sends user_seed
  3. Server reveals server_seed
  4. Outcome = f(server_seed, user_seed)

Looks fine.

But notice something subtle:

  • The server chose its seed before seeing the user seed.
  • The user chose their seed after seeing the commitment — but not the actual value.

Now flip it:

  1. Server commits: H(server_seed)
  2. User reveals user_seed
  3. Server decides whether to proceed

If the server can abort silently after seeing the user's input, it can simulate outcomes offline and only continue favorable rounds.

No hash is broken.
No cryptography is violated.
But fairness is compromised.

Architectural Rule #1:
No party should be able to influence continuation after observing partial entropy.

That's a protocol design constraint — not a hashing problem.


2. Commit–Reveal Is Necessary — Not Sufficient

A single-sided commitment only prevents post-hoc manipulation.

It does not prevent:

  • Abort-and-retry attacks
  • Selective round publication
  • Pre-computation and discarding
  • Biased entropy timing

A stronger pattern is double-blind commitment:

  1. Server commits: H(server_seed)
  2. User commits: H(user_seed)
  3. Server reveals server_seed
  4. User reveals user_seed
  5. combined = H(server_seed || user_seed)
  6. outcome = deterministic_map(combined)

Now neither party sees the other's entropy before locking in their own.

You've removed "last-move advantage".

That's a protocol improvement — not a cosmetic change.


3. Determinism Is Non-Negotiable

Even with perfect entropy symmetry, systems fail if the mapping function isn't strictly deterministic.

Common anti-patterns:

  if (result < 0.01) {
    result = reroll();
  }

or

  while (value > max) {
    value = hash_again();
  }

Or hidden "edge-case" handling buried in business logic.

If running the same inputs twice can produce different outputs, verification becomes probabilistic instead of mathematical.

If your system uses Math.random() anywhere in the verification path, it is not verifiable.

Architectural Rule #2:
The outcome function must be pure, stateless, and deterministic.

Given identical inputs, it must produce identical outputs — forever.

No retries.
No hidden branching.
No external state.
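A compliant outcome map can be as small as this. The function below is a hypothetical example, not the article's prescribed map; note it takes no randomness, holds no state, and never retries (the modulo here is for brevity only — the next section covers why unbiased mapping needs more care):

```javascript
// Pure, stateless, deterministic: identical inputs always yield
// identical outputs. `combinedHex` is the hex digest of the combined
// seeds; `sides` is the number of possible outcomes.
function deterministicMap(combinedHex, sides) {
  const n = parseInt(combinedHex.slice(0, 8), 16); // first 32 bits
  return (n % sides) + 1;                           // map into [1, sides]
}
```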


4. Range Mapping Is Where Fairness Dies Quietly

Even with strong entropy, biased mapping breaks fairness.

The classic example:

roll = random_value % 10;

If random_value comes from a space not evenly divisible by 10, some results will be slightly more likely.

This is called modulo bias.

It's subtle.
It passes casual inspection.
But over time, distribution skews measurably.

Correct range mapping requires:

  • Rejection sampling
  • Bit slicing with proper bounds
  • Careful handling of entropy exhaustion

Architectural Rule #3:
Entropy extraction and range mapping must be mathematically uniform.

Most "provably fair" implementations fail here — not at hashing.


5. Public Entropy Isn't a Silver Bullet

Some systems attempt to strengthen fairness by mixing in public entropy:

  • Block hashes
  • Randomness beacons
  • External APIs

This can improve robustness — but only if done correctly.

Common mistake:

combined = H(server_seed || block_hash)

Where the server chooses which block hash to use after observing it.

That reintroduces timing control.

For public entropy to add security:

  1. The round must commit to a future entropy source before the entropy is published (otherwise, selection bias reappears)
  2. It must be independently verifiable
  3. It must be combined deterministically
  4. It must not be optional

Otherwise, it's theater.


6. The Offline Verification Test

Here's a simple litmus test for any "provably fair" system:

Can a third party verify an outcome offline, years later, using only:

  • the published commitments
  • the revealed seeds
  • the documented algorithm

If verification requires:

  • an API call
  • platform permission
  • special tooling
  • hidden parameters

Then the system still depends on trust.

Architectural Rule #4:
Verification must be permissionless and reproducible without infrastructure.

If your server disappears, verification should still work.

That's the difference between auditability and availability.


7. Logs and Non-Repudiation

Even perfect randomness fails if rounds can disappear.

If a system can:

  • Drop unfavorable rounds
  • Reorder round IDs
  • Omit commitments
  • Retroactively insert data

Then fairness becomes unverifiable.

Strong designs include:

  • Sequential round identifiers
  • Immutable public logs
  • Time-anchored commitments
  • Transparent lifecycle states

Fairness is not just about randomness — it's about event integrity.


8. What "Provably Fair" Actually Requires

At the architecture level, a robust system needs:

  1. Symmetric entropy commitment (no last-move advantage)
  2. Deterministic entropy combination
  3. Pure outcome mapping
  4. Uniform range extraction
  5. Immutable round logging
  6. Offline reproducibility

Remove any one of these, and trust creeps back in.

And once trust re-enters, the system becomes reputation-based — not proof-based.


9. Why Systems Eventually Get Caught

Most fairness failures aren't dramatic.

They're statistical.

  • Slight edge-case skew
  • Missing commitments
  • Biased modulo mapping
  • Silent retries

Over thousands of rounds, small biases compound.

Cheating requires perfection forever.
Verification requires one curious engineer with a script.

Time favors math.


Final Thought

Provable fairness is not about adding a hash and calling it secure.

It's about designing protocols where:

  1. no one gets the last move
  2. entropy is symmetric
  3. algorithms are deterministic
  4. mapping is uniform
  5. logs are immutable
  6. verification is independent

If verification requires trust, it isn't provably fair.

It's just harder to cheat.

And engineering systems that are merely "hard to cheat" is very different from engineering systems that are impossible to cheat without detection.

One is security through friction.
The other is security through math.


A Practical Example

If you're curious what this looks like in practice, I built a minimal live demo of a provably fair dice roll using commit–reveal combined with future-time-locked drand entropy.

You can inspect the inputs, recompute the hashes, and verify the result independently.

Demo: https://blockrand.net/live.html

Source: https://github.com/blockrand-api/blockrand-js

No APIs required for verification — everything is reproducible client-side.
