The Problem Nobody Wants to Solve
A company is running out of money. The runway is eight months. The board says cut costs or die.
The default answer is layoffs. Pick 87 people. Walk them out. The math works: fewer salaries, longer runway. But the people who stay carry survivor's guilt, institutional knowledge walks out the door, and the company that was supposed to be "family" just proved it wasn't.
We asked a different question: what if everyone took a small, temporary pay cut instead?
Not forced. Not uniform. Each person declares the maximum percentage they're willing to give. An algorithm distributes the burden fairly, respects every individual's limit, and extends the runway. Nobody loses their job.
This is Seuil. French for "threshold." The threshold where individual sacrifice becomes collective strength.
The Constraint That Makes It Hard
Here's the thing about this algorithm. It has one rule that cannot bend:
∀i : adjustment_i ≤ declared_threshold_i
Every person's adjustment must be less than or equal to what they consented to. Not approximately. Not on average. For every single employee, every single time, without exception.
If the algorithm ever violates this, even by a fraction of a percent for one person in one run, the entire system loses legitimacy. You can't ask people to trust a salary adjustment tool that sometimes overrides their consent.
This constraint turns what looks like a simple optimization problem into something genuinely interesting. You're maximizing headcount retention subject to hard consent constraints, a savings floor, fairness requirements across tiers and departments, and the reality that people change their minds mid-plan.
The Algorithm: Iterative Clamping
The core of Seuil's Rempart engine is a weighted proportional allocation with iterative clamping. Here's the intuition.
Each employee gets a "burden weight" based on the active fairness mode. In executive-heavy mode, executives get a 2x weight. In equalized mode, everyone gets the same weight. In critical-protection mode, employees with high criticality scores get lower weights.
The algorithm finds a single scale factor, λ (lambda), such that:
λ = target_savings / Σ(salary_i × weight_i)
Each employee's adjustment is λ × weight_i. Simple. But some employees will exceed their declared threshold at this λ. So we clamp them at their ceiling and remove them from the active set. This reduces the denominator, which increases λ for the remaining employees. Some of them now exceed their thresholds. Clamp again. Repeat.
The trick that makes this fast: sort employees by ceiling / weight ascending before starting. Then the clamping loop is a single linear scan. Employees who clamp first are at the front of the sorted list. You walk forward, clamping and accumulating, until you find the first employee who fits under the current λ. Everyone after that fits too. Done.
Total complexity: O(n log n) for the sort, O(n) for the scan. For 1,240 employees, this runs in about 3ms. For 100,000, about 10ms.
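The whole loop can be sketched in a few dozen lines. This is a simplified illustration, not the shipped engine: it uses f64 for readability (the real solver is integer-only, as described below), and the struct and field names are ours, not Rempart's.

```rust
// Sketch of weighted proportional allocation with iterative clamping.
// f64 for readability only; names are illustrative.
struct Emp {
    salary: f64,
    weight: f64,  // burden weight from the active fairness mode
    ceiling: f64, // declared threshold, as a fraction of salary
}

/// Returns each employee's adjustment fraction, or None if the target is infeasible.
fn solve(emps: &[Emp], target: f64) -> Option<Vec<f64>> {
    // Sort by ceiling / weight ascending: employees who clamp first come first.
    let mut order: Vec<usize> = (0..emps.len()).collect();
    order.sort_by(|&a, &b| {
        (emps[a].ceiling / emps[a].weight)
            .partial_cmp(&(emps[b].ceiling / emps[b].weight))
            .unwrap()
    });

    let mut remaining = target;
    let mut active_wp: f64 = emps.iter().map(|e| e.salary * e.weight).sum();
    let mut adj = vec![0.0; emps.len()];

    for (k, &i) in order.iter().enumerate() {
        let lambda = remaining / active_wp;
        if lambda * emps[i].weight > emps[i].ceiling {
            // Exceeds consent: clamp at the ceiling, remove from the active set.
            adj[i] = emps[i].ceiling;
            remaining -= emps[i].salary * emps[i].ceiling;
            active_wp -= emps[i].salary * emps[i].weight;
        } else {
            // First employee who fits under this lambda: everyone after fits too.
            for &j in &order[k..] {
                adj[j] = lambda * emps[j].weight;
            }
            return Some(adj);
        }
    }
    // Every ceiling hit and savings still owed: target infeasible.
    (remaining <= 1e-9).then_some(adj)
}
```

Note how clamping an employee shrinks both `remaining` and `active_wp`, which is exactly what pushes λ up for everyone still in the active set.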
Why Rust. Why Integers.
The first prototype was TypeScript. It worked. 15 test cases passed. The simulator dashboard used it directly via useMemo. But TypeScript has a problem for financial computing: IEEE 754.
Floating point arithmetic is not associative. (a + b) + c is not always equal to a + (b + c). When you're summing salary adjustments across a thousand employees, the order of operations affects the result. The same input can produce different outputs depending on how the JavaScript engine optimizes the computation. And the rounding errors accumulate.
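The classic demonstration, in Rust since that is where the engine ended up:

```rust
fn main() {
    let (a, b, c) = (0.1_f64, 0.2, 0.3);
    // Same three numbers, different grouping, different bits.
    let left = (a + b) + c;  // 0.6000000000000001
    let right = a + (b + c); // 0.6
    assert_ne!(left, right);
}
```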
For a system where people's livelihoods depend on the math, "close enough" isn't.
So we rebuilt in Rust with integer arithmetic throughout. Every monetary value is stored as i64 cents. Every percentage is stored as u16 basis points (hundredths of a percent). The clamping comparison uses u128 cross-multiplication to avoid division entirely:
```rust
// Does this employee need clamping?
// remaining × weight × 10000 > active_wp × ceiling
let lhs = remaining_cents as u128 * item.weight_millionths as u128 * 10000;
let rhs = active_wp as u128 * item.ceiling_bps as u128;
if lhs > rhs { /* clamp */ }
```
No floating point touches the solver. The API layer converts between f64 JSON (what the frontend speaks) and integer core (what the engine computes) at a single boundary. Inside the engine, it's integers all the way down.
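A minimal sketch of that boundary and representation (function names are illustrative, not the engine's API):

```rust
// f64 crosses the boundary exactly once; everything inside is integer.
fn dollars_to_cents(d: f64) -> i64 {
    (d * 100.0).round() as i64
}

// Basis points of a salary in cents. Truncating division rounds toward
// zero, so the computed adjustment can never round up past the consented
// amount (a policy assumption in this sketch).
fn adjustment_cents(salary_cents: i64, adj_bps: u16) -> i64 {
    (salary_cents as i128 * adj_bps as i128 / 10_000) as i64
}
```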
This gave us something that floating point never could: determinism. The same input produces the exact same output on every platform, every run, every time. The 100-run determinism test doesn't check for "close enough." It checks for bit-identical results.
How We Test: Everything TigerBeetle Taught Us
TigerBeetle is a financial transactions database that tests itself with a VOPR (Viewstamped Operation Replicator), a fuzzer that generates random operation sequences and checks invariants after every single operation. Their philosophy: if a financial system can be broken by any sequence of valid operations, it will be broken by real users. Find it first.
We adopted this wholesale.
The VOPR
Our VOPR generates random sequences of operations: solves, mass declines, threshold changes, fairness mode switches, target adjustments. After each operation, it checks seven invariants:
- Consent inviolability. No adjustment exceeds any threshold.
- Conservation of savings. No money created or destroyed by rounding.
- Monotonic feasibility. More participation never makes things less feasible.
- Determinism. Same inputs always produce same outputs.
- Fairness ordering. Equalized mode always produces lower Gini than executive-heavy.
- Rebalance convergence. Any sequence of accepts/declines produces a valid state.
- No phantom money. Reported totals match the sum of individual contributions.
We run 10,000 sequences at a time. Each sequence is 10 to 50 operations on a randomly generated company of 100 to 2,000 employees. That's roughly 350,000 operations with invariant checking after every single one.
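The shape of the loop, in miniature. This is a toy, not the real harness: a seeded PRNG picks operations on a trivial state, and the consent invariant is re-checked after every single one.

```rust
// Deterministic PRNG so every failure is replayable from its seed.
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        self.0
    }
}

/// Toy state: each employee is (adjustment_bps, threshold_bps).
fn vopr(seed: u64, ops: usize) -> Vec<(u16, u16)> {
    let mut rng = Lcg(seed);
    let mut emps: Vec<(u16, u16)> = vec![(0, 1000); 100];
    for _ in 0..ops {
        let i = (rng.next() % emps.len() as u64) as usize;
        match rng.next() % 3 {
            // A solve assigns some adjustment within the threshold.
            0 => emps[i].0 = (rng.next() % (emps[i].1 as u64 + 1)) as u16,
            // A threshold change forces a re-clamp of the adjustment.
            1 => {
                emps[i].1 = (rng.next() % 2000) as u16;
                emps[i].0 = emps[i].0.min(emps[i].1);
            }
            // A decline zeroes the adjustment.
            _ => emps[i].0 = 0,
        }
        // Invariant 1, consent inviolability, checked after every operation.
        assert!(emps.iter().all(|&(adj, thr)| adj <= thr));
    }
    emps
}
```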
The first version ran at 264 operations per second. That's where TigerBeetle's other lesson kicked in.
TigerStyle: Zero Allocation in the Hot Path
TigerBeetle pre-allocates all memory at startup and never allocates during operation. We were doing the opposite: cloning the employee array on every solve, allocating new Vec for each sort, building string-heavy output structs, and then running a determinism re-check (which doubles the work) inside the invariant checker.
We applied TigerStyle principles:
- `SolverArena`: pre-allocated scratch space for all solver buffers. Allocated once, reused across every solve. The VOPR loop does zero heap allocation after init.
- `solve_arena()`: borrows `&[Employee]` instead of owning `Vec<Employee>`. No clone at the boundary.
- Integer-only `check_invariants()`: reads from `arena.adjustments_full[i]` by index. No string matching. O(n) per check.
- Determinism checked by replay, not re-execution. TigerBeetle's insight: determinism is a property of the code, not of individual operations.
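The arena pattern, in miniature (field names are illustrative, not the engine's):

```rust
// Allocate once at startup, reuse across every solve: after init, the hot
// path performs no heap allocation.
struct SolverArena {
    order: Vec<u32>,       // sort scratch: employee indices
    adjustments: Vec<u16>, // per-employee result, in basis points
}

impl SolverArena {
    fn with_capacity(max_employees: usize) -> Self {
        Self {
            order: Vec::with_capacity(max_employees),
            adjustments: Vec::with_capacity(max_employees),
        }
    }

    // clear() keeps capacity, so no reallocation occurs as long as n stays
    // within the capacity reserved at init.
    fn solve(&mut self, n: usize) {
        self.order.clear();
        self.order.extend(0..n as u32);
        self.adjustments.clear();
        self.adjustments.resize(n, 0);
        // ... sort self.order, run the clamping scan, fill self.adjustments ...
    }
}
```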
Result: 38,469 ops/sec. A 146x speedup. An overnight 8-hour run executes approximately 1.1 billion invariant-checked operations.
The Multiverse
The VOPR tests random operations within one company. But the algorithm must work for any company. So we built 20 universes. A sample:
| Universe | Employees | What it tests |
|---|---|---|
| Sole Proprietor | 1 | N=1 edge case |
| Garage Startup | 5 | Fairness at intimate scale |
| Mega Corp | 100,000 | Integer overflow at scale |
| All Executives | 500 | Flat hierarchy, stingy limits |
| Engineering Strike | 1,500 | Entire department walks out |
| Geographic Pay Gap | 2,000 | Same role, 10x salary by location |
| Concentration Risk | 200 | One person earns 25% of payroll |
| Razor's Edge | 500 | Target barely feasible |
| Contractor Heavy | 1,500 | 60% can't participate |
| Pay Equity Stress | 1,200 | Systematic salary gap |
Each universe has its own salary distribution, threshold culture, participation pattern, and tier structure. The VOPR runs against all of them. Zero violations across all 20.
God Mode
The final test: one million employees calibrated to the actual economic structure of planet Earth.
We used ILO World Employment data, World Bank income distribution, and Milanovic's global inequality research. The salary distribution follows a log-normal body with a Pareto tail (alpha 1.7). Eight global regions weighted by workforce share. Salaries from $150/year (Burundi) to $5.8 million/year. A 30,000:1 dynamic range.
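The shape of that generator can be sketched as follows. The mixture split, log-normal parameters, and tail cutoff here are illustrative placeholders, not the calibrated values; only the Pareto alpha of 1.7 comes from the text.

```rust
/// Log-normal body with a Pareto tail (alpha = 1.7), seeded LCG uniforms.
fn salaries(seed: u64, n: usize) -> Vec<f64> {
    let mut s = seed;
    // Uniform in (0, 1], reproducible from the seed.
    let mut uniform = move || {
        s = s.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        ((s >> 11) as f64 + 1.0) / (1u64 << 53) as f64
    };
    (0..n)
        .map(|_| {
            if uniform() < 0.95 {
                // Body: log-normal via Box-Muller (mu, sigma illustrative).
                let (u1, u2) = (uniform(), uniform());
                let z = (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos();
                (10.0 + 1.2 * z).exp()
            } else {
                // Tail: Pareto by inverse CDF, x_m / u^(1/alpha).
                30_000.0 / uniform().powf(1.0 / 1.7)
            }
        })
        .collect()
}
```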
At this scale, every subtle bug becomes loud:
- A 1-cent rounding error per employee is $10,000 of phantom money
- The u128 cross-multiplication in the clamping comparison handles values up to 10^38
- The sort processes one million 32-byte structs in about 35ms
- The Gini coefficient computation must handle a distribution vastly more unequal than any single company
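The Gini check in that last bullet can be sketched with the standard sorted-rank formula (this is the textbook version, not the engine's integer implementation):

```rust
/// Gini coefficient of a nonnegative distribution: 0 is perfect equality,
/// values approaching 1 are maximal inequality.
fn gini(values: &[f64]) -> f64 {
    let mut xs = values.to_vec();
    xs.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = xs.len() as f64;
    let total: f64 = xs.iter().sum();
    // Rank-weighted sum: G = 2·Σ(i·x_i) / (n·Σx) − (n+1)/n, ranks 1-based.
    let ranked: f64 = xs.iter().enumerate().map(|(i, x)| (i as f64 + 1.0) * x).sum();
    2.0 * ranked / (n * total) - (n + 1.0) / n
}
```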
All invariants held. The $150/year Burundian worker and the $5.8M/year executive both got adjustments within their declared thresholds. Not one dollar of phantom money.
The Arithmetic Bug That Almost Shipped
This is the part where I'm supposed to say everything worked perfectly. It didn't.
When porting the TypeScript solver to Rust integer arithmetic, we got the unit scaling wrong in the clamping loop. The formula should have been:
adj_bps = remaining_cents × 10000 × weight / active_wp
But we wrote:
adj_bps = remaining_cents × weight × 10000 / (active_wp × 1_000_000)
That extra 1_000_000 in the denominator divided every adjustment by a million too much, so every employee got an adjustment of zero. The "baseline sanity" test caught it immediately: total savings was zero, which is not what you want from a cost-cutting algorithm.
The fix was one line. But the fact that the test caught it in under a second, before the code ever ran against real data, is the entire point of this kind of testing. Financial bugs don't announce themselves. They hide in rounding, in edge cases, in the difference between what you meant to compute and what you actually computed. You find them with proofs, not with demos.
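A unit check makes the corrected formula hard to get wrong (variable names illustrative): remaining is in cents, weights in millionths, and active_wp is the sum of cents × millionths, so the ×10,000 is exactly what lands the result in basis points.

```rust
// adj_bps = remaining_cents × 10000 × weight / active_wp
// Units: cents × millionths / (cents × millionths) = dimensionless,
// and the ×10,000 scales that fraction into basis points.
fn adj_bps(remaining_cents: u64, weight_millionths: u64, active_wp: u128) -> u64 {
    (remaining_cents as u128 * 10_000 * weight_millionths as u128 / active_wp) as u64
}
```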
Run It Yourself
We compiled the Rempart engine to WebAssembly and built a visualization at bench.seuil.dev.
It's not a demo with mock data. The production Rust engine, compiled to 379KB of WASM, runs real constrained optimization in your browser. You can click any of the 36 tests and watch the Rempart engine solve, verify, and prove its guarantees in real time.
The signal field visualization (adapted from a previous project) shows the solver's internal stages as a node graph. Tier 1 nodes are solver stages: filter, feasibility, weight computation, iterative clamping. Tier 2 nodes are verification: consent check, drift check, determinism. The verdict node at the bottom lights up green only when all invariants hold. Particles flow between nodes as the computation proceeds.
Each test tells a three-act story:
- The Situation. What's at stake. In human terms.
- The Challenge. What goes wrong. The chaos.
- The Proof. Did the engine protect everyone?
The Numbers
| Metric | Value |
|---|---|
| Solver language | Rust, integer arithmetic |
| Arithmetic | i64 cents, u16 basis points, u128 comparisons |
| Test suites | 36 (14 adversarial + 20 multiverse + 1 VOPR + 1 God Mode) |
| VOPR throughput | 38,469 ops/sec with invariant checking |
| Overnight capacity | ~1.1 billion checked operations in 8 hours |
| Max employees tested | 1,000,000 (planetary economics) |
| Salary dynamic range | 30,000:1 ($150/yr to $5.8M/yr) |
| Consent violations | 0 |
| Phantom money | 0 |
| Non-deterministic results | 0 |
| WASM bundle size | 379KB (108KB gzipped) |
What This Is Really About
There's a tendency in software to ship fast and fix later. Move fast and break things. The problem is, some things shouldn't break. A salary adjustment algorithm that tells a junior employee "we're taking 12% of your pay" when they consented to 10% is not a bug to fix in the next sprint. It's a betrayal of trust that you can't unfix.
We didn't build this testing infrastructure because it was fun (though the VOPR is genuinely fun to watch). We built it because the alternative was asking people to trust software that we couldn't prove was correct.
We don't ship promises. We ship proofs.
Try it: bench.seuil.dev
The Seuil Continuity System and Rempart engine are open source. The Rust engine, TypeScript prototype, and WASM benchmark visualization are all available on GitHub.