If you work in building HVAC, you know the ritual. You need a fan that will push 5000 m³/h at 400 Pa. Your manufacturer rep emails you a 400 MB installer — usually Windows-only, often Delphi, almost always from 2007 — and you click through dialog boxes until the program picks a fan for you. The selection engine is a black box. You cannot embed it. You cannot query it. You cannot even link to a result.
The math underneath is not complicated. It has been public for decades. It is just locked behind a desktop binary.
I have been building a web-native alternative for a Russian HVAC marketplace. The selection engine now indexes 18,141 real manufacturer pressure–flow curves across 13 fan families, and picks a matching fan for a given duty point in about 4 ms on a single Postgres row read plus in-memory evaluation. This post walks through the storage model, the evaluation math, the matching loop, and the three gotchas that each cost me roughly a week.
Full working code: github.com/goncharovart/polynomial-fan-matcher. Everything below runs in production on wentmarket.ru.
The problem, stated precisely
A fan has a characteristic curve: for every volumetric flow rate Q (in m³/h) the fan is capable of producing, there is a corresponding static pressure P(Q) (in Pa) it can develop. As Q goes up, P goes down. The curve is smooth and roughly quadratic.
Given a target duty point — say Q_target = 5000 m³/h at P_target = 400 Pa, with a tolerance band of ±15% on pressure — I want to search a catalog of ~18,000 curves and return the subset that:
- Physically covers the target flow (the curve is defined at Q = 5000).
- Produces a pressure at that flow inside [340, 460] Pa.
- Is ranked by efficiency η(Q_target), because two fans that both "work" are not the same fan — the one consuming 2.2 kW instead of 4.0 kW pays for itself in 18 months.
Why a lookup table is the wrong answer
The naive move is to store each curve as a list of sampled (Q, P) points — say 100 evenly spaced samples between Q_min and Q_max. Matching then becomes "find the row, interpolate linearly between the two nearest samples."
For 18,141 curves at 100 samples each, that is 1.8 million rows. Postgres can chew through that, but you are paying three costs you did not have to pay:
- Storage is inflated. Each curve that actually fits in 5 floats is now 200 floats.
- Every read is an interpolation, which introduces piecewise-linear error between sample points.
- Scaling the curve becomes awkward. Fans run at variable speeds via VFDs; the affinity laws say Q ∝ n and P ∝ n². Applied to a polynomial, that is a one-line coefficient transform. Applied to 100 samples, it is 100 multiplications plus a full re-sampling grid.
Storage: a tiny array of coefficients
A fan curve is well-approximated by a polynomial of modest degree:
P(Q) = a_0 + a_1·Q + a_2·Q² + a_3·Q³ + ... + a_n·Qⁿ
In practice, degree 3–5 is enough. Catalogs I imported use degree 6 at the high end. Each curve is stored as an array of 7 floats.
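The coefficients themselves come from fitting manufacturer sample points offline. The post's importer is not shown; here is a least-squares sketch of my own (`fitPolynomial` is a name I made up, not a repo function). It solves the normal equations of the Vandermonde system — for real catalogs with Q in the thousands and degree 6 you would want to rescale Q to roughly [0, 1] first, since raw Qⁿ entries make the system ill-conditioned:

```typescript
// Illustrative sketch: fit polynomial coefficients to sampled (Q, P)
// catalog points via least squares. Not the production importer.
export function fitPolynomial(qs: number[], ps: number[], degree: number): number[] {
  const n = degree + 1;
  // Build normal equations A·x = b, where A = VᵀV and b = Vᵀp
  // for the Vandermonde matrix V[k][i] = qs[k]^i.
  const A: number[][] = Array.from({ length: n }, () => new Array(n).fill(0));
  const b: number[] = new Array(n).fill(0);
  for (let k = 0; k < qs.length; k++) {
    const pow: number[] = [1]; // powers qs[k]^0 .. qs[k]^(2n-2)
    for (let i = 1; i < 2 * n - 1; i++) pow.push(pow[i - 1] * qs[k]);
    for (let i = 0; i < n; i++) {
      b[i] += pow[i] * ps[k];
      for (let j = 0; j < n; j++) A[i][j] += pow[i + j];
    }
  }
  // Gauss–Jordan elimination with partial pivoting.
  for (let col = 0; col < n; col++) {
    let piv = col;
    for (let r = col + 1; r < n; r++) {
      if (Math.abs(A[r][col]) > Math.abs(A[piv][col])) piv = r;
    }
    [A[col], A[piv]] = [A[piv], A[col]];
    [b[col], b[piv]] = [b[piv], b[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = A[r][col] / A[col][col];
      for (let c = col; c < n; c++) A[r][c] -= f * A[col][c];
      b[r] -= f * b[col];
    }
  }
  return b.map((v, i) => v / A[i][i]); // ascending-power coefficients
}
```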
CREATE TABLE fan_curves (
id bigserial PRIMARY KEY,
fan_id bigint NOT NULL,
label text NOT NULL,
coeffs double precision[] NOT NULL,
q_min double precision NOT NULL,
q_max double precision NOT NULL,
eta_xpoints double precision[] NOT NULL,
eta_values double precision[] NOT NULL,
n_nominal double precision NOT NULL,
fan_type text NOT NULL
);
CREATE INDEX fan_curves_q_range_idx ON fan_curves (q_min, q_max);
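The "range-filter" stage in the benchmark below is just this query shape (a sketch; the production query selects a few more columns):

```sql
-- Candidate fetch: curves whose fit domain covers the target flow.
-- The pressure check happens in application code after Horner evaluation.
SELECT id, fan_id, coeffs, q_min, q_max, eta_xpoints, eta_values
FROM fan_curves
WHERE q_min <= 5000 AND q_max >= 5000;
```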
Evaluation: Horner, not Math.pow
The wrong way:
// DON'T do this
let p = 0;
for (let i = 0; i < coeffs.length; i++) {
p += coeffs[i] * Math.pow(q, i);
}
Horner's method: same polynomial, computed right-to-left with one multiply and one add per coefficient. No pow:
export function evaluatePolynomial(coeffs: number[], q: number): number {
if (coeffs.length === 0 || !Number.isFinite(q)) return 0;
let result = 0;
for (let i = coeffs.length - 1; i >= 0; i--) {
result = result * q + coeffs[i];
if (!Number.isFinite(result)) return 0;
}
return result;
}
On an M2 laptop this evaluates a degree-6 polynomial in ~30 ns. For 18,141 curves the full batch is ~540 µs. The rest of the 4 ms budget is Postgres I/O.
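If you want to see the gap on your own machine, here is a throwaway micro-benchmark (the coefficients are made up for illustration, not a real curve):

```typescript
// Micro-benchmark sketch: naive Math.pow evaluation vs. Horner's method
// on a degree-6 polynomial with illustrative coefficients.
function evalPow(coeffs: number[], q: number): number {
  let p = 0;
  for (let i = 0; i < coeffs.length; i++) p += coeffs[i] * Math.pow(q, i);
  return p;
}

function evalHorner(coeffs: number[], q: number): number {
  let r = 0;
  for (let i = coeffs.length - 1; i >= 0; i--) r = r * q + coeffs[i];
  return r;
}

const coeffs = [500, -0.01, -2e-6, 1e-10, -1e-14, 2e-19, -1e-24];

for (const [name, fn] of [["pow", evalPow], ["horner", evalHorner]] as const) {
  const t0 = performance.now(); // global in Node 16+
  let acc = 0; // accumulate so the JIT cannot dead-code the loop
  for (let i = 0; i < 1_000_000; i++) acc += fn(coeffs, 1000 + (i % 9000));
  console.log(name, (performance.now() - t0).toFixed(1), "ms");
}
```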
The matching loop
export function matchDuty(curves: Curve[], duty: Duty): Match[] {
const { qTarget, pTarget, tolerance } = duty;
const pMin = pTarget * (1 - tolerance);
const pMax = pTarget * (1 + tolerance);
const results: Match[] = [];
for (const c of curves) {
if (qTarget < c.qMin || qTarget > c.qMax) continue;
const p = evaluatePolynomial(c.coeffs, qTarget);
if (p < pMin || p > pMax) continue;
const eta = interpolateEta(c.etaXpoints, c.etaValues, qTarget);
results.push({ curve: c, pressureAtQ: p, deviation: (p - pTarget) / pTarget, eta });
}
results.sort((a, b) => (b.eta - a.eta) || (Math.abs(a.deviation) - Math.abs(b.deviation)));
return results;
}
Benchmark on production catalog:
| Stage | Mean time |
|---|---|
| Postgres range-filter + row transfer | 2.8 ms |
| Horner evaluation, 2,100 rows mean | 0.6 ms |
| η interpolation and sort | 0.5 ms |
| Total (warm cache) | 3.9 ms |
The three gotchas that each cost a week
1. Pressure and efficiency are separate functions
Assuming η(Q) can be derived from P(Q) via a shaft-power formula is the single most common mistake in naive fan-selector tutorials. It cannot. Pressure drops roughly quadratically; efficiency has a bell shape peaking around 70% of Q_max. Two curves with similar pressure at a duty point can have drastically different efficiencies there. Store η as its own thing — my data ships as vectors of sampled measurement points, linearly interpolated.
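The matching loop calls `interpolateEta`, which the post does not show. A sketch of a linear interpolator over those sampled points (the repo version may differ in edge handling):

```typescript
// Linear interpolation over sampled efficiency points.
// xs must be sorted ascending; xs and ys have equal length.
export function interpolateEta(xs: number[], ys: number[], q: number): number {
  if (xs.length === 0 || xs.length !== ys.length) return 0;
  if (q <= xs[0]) return ys[0];                         // clamp at the edges —
  if (q >= xs[xs.length - 1]) return ys[ys.length - 1]; // never extrapolate η
  let i = 1;
  while (xs[i] < q) i++; // first sample at or beyond q
  const t = (q - xs[i - 1]) / (xs[i] - xs[i - 1]);
  return ys[i - 1] + t * (ys[i] - ys[i - 1]);
}
```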
2. Polynomial degree varies per manufacturer
One catalog fits degree-3. Another fits degree-5. A third went with degree-6. Zero-pad every curve to the max degree so the hot loop has a fixed shape and the JS engine keeps the inner loop monomorphic:
const MAX_DEGREE = 6;
function normalizeCoeffs(coeffs: number[]): number[] {
const out = new Array(MAX_DEGREE + 1).fill(0);
for (let i = 0; i < coeffs.length && i <= MAX_DEGREE; i++) out[i] = coeffs[i];
return out;
}
Measured: ~30% faster on the full batch after zero-padding.
3. Extrapolation is silently wrong
A polynomial evaluated outside its fit domain does not return an error. It returns a physically meaningless number — often negative pressure or a value implying the fan produces more air at higher static pressure, which is not how fans work. The domain check is a correctness guarantee, not an optimization:
if (qTarget < c.qMin || qTarget > c.qMax) continue;
I skipped this originally. A user asked for 200 m³/h on a fan whose curve started at 800. Engine reported 1200 Pa. The fan would stall in reality.
Scaling across RPM — the affinity law bonus
export function scaleByRpm(coeffs: number[], nBase: number, nTarget: number): number[] {
const r = nTarget / nBase;
return coeffs.map((a, i) => a * Math.pow(r, 2 - i));
}
One pass over the array. The stored curve stays unmodified.
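A quick physics check of the transform at a known point (self-contained copy of the two functions; the coefficients are made up). Halving the speed should halve the free-delivery flow and quarter the pressure everywhere on the curve:

```typescript
// Affinity-law transform: a_i' = a_i · r^(2−i), where r = nTarget / nBase.
function scaleByRpm(coeffs: number[], nBase: number, nTarget: number): number[] {
  const r = nTarget / nBase;
  return coeffs.map((a, i) => a * Math.pow(r, 2 - i));
}

// Horner evaluation, same as evaluatePolynomial above.
function horner(coeffs: number[], q: number): number {
  let p = 0;
  for (let i = coeffs.length - 1; i >= 0; i--) p = p * q + coeffs[i];
  return p;
}

// P(Q) = 400 − 4e-6·Q² has free delivery (P = 0) at Q = 10 000 m³/h.
const base = [400, 0, -4e-6];
const half = scaleByRpm(base, 1400, 700); // r = 0.5 → [100, 0, -4e-6]

console.log(horner(base, 10000)); // ≈ 0 Pa at full speed
console.log(horner(half, 5000));  // ≈ 0 Pa at half speed: Q scaled by r, P by r²
```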
Benchmarks, cold and hot
| Scenario | p50 | p95 | p99 |
|---|---|---|---|
| Cold cache, first query | 38 ms | 62 ms | 94 ms |
| Warm cache | 4.2 ms | 6.1 ms | 9.8 ms |
| Warm + query cache hit | 0.9 ms | 1.4 ms | 2.3 ms |
What is in the repo, and what is next
The polynomial-fan-matcher repo extracts the evaluation and matching core from production. It ships:
- evaluatePolynomial(coeffs, q) — Horner in 10 lines
- scaleByRpm(coeffs, nBase, nTarget) — affinity-law transform on coefficients
- matchDuty(curves, duty) — the matching loop from this post
- Tests against manufacturer reference data
This OSS module is one subsystem extracted from wentmarket.ru — a vertical B2B HVAC portal I built over ~2 months. The full platform combines a marketplace (2K+ SKUs, B2B/B2C commerce with 1C ERP and Bitrix24 CRM integration), an engineer's toolkit (17 calculation engines: fan selection, VFD, LCC, acoustic silencers, AHU designer, duct losses, 3D BIM exports), and a Telegram ecosystem (27 bot commands + Mini App). I used Claude Code as a tool-accelerator for boilerplate, test generation, and first-pass bug hunting — architecture, stack choices, HVAC domain modeling, and production QA are mine. If you work on HVAC tooling in the West and want to talk about collaboration, licensing, or engineering work, I am reachable at goncharov.artur.02@gmail.com.
Summary
- Store fan curves as polynomial coefficient arrays in Postgres, not sampled lookup tables.
- Evaluate with Horner's method; 10 lines, no Math.pow, stable floating-point.
- Index on (q_min, q_max) and domain-check every query.
- Keep pressure and efficiency as independent curves.
- An extract of the production engine is open source at github.com/goncharovart/polynomial-fan-matcher; it matches 18k curves in ~4 ms.