
[CS] Alishopping

Building a scoring engine with pure TypeScript functions (no ML, no backend)

We needed to score e-commerce products across multiple dimensions: quality, profitability, market conditions, and risk.

The constraints:

  • Scores must update in real time
  • Must run entirely in the browser (Chrome extension)
  • Must be explainable (not a black box)

We almost built an ML pipeline — training data, model serving, APIs, everything.

Then we asked a simple question:

Do we actually need machine learning for this?

The answer was no.

We ended up building several scoring engines in pure TypeScript.
Each one is a single function, under 100 lines, zero dependencies, and runs in under a millisecond.


What "pure function" means here

Each scoring engine follows 3 rules:

  1. No I/O → no network, no DB, no files
  2. Deterministic → same input = same output
  3. No side effects → no global state, no mutations

This makes them:

  • Easy to test
  • Easy to reason about
  • Portable (browser, Node.js, anywhere)
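The testability point is easy to see with a toy example. The mini-scorer below is hypothetical (it mirrors the `marginPercent * 2` profit formula used later in the post): because it is pure, a handful of equality assertions covers it completely.

```typescript
// Hypothetical mini-scorer: doubles a margin percentage and caps it at 0–100.
function scoreMargin(marginPercent: number): number {
  return Math.max(0, Math.min(100, marginPercent * 2));
}

// Deterministic + no side effects => plain equality assertions are a full test.
console.assert(scoreMargin(30) === 60);
console.assert(scoreMargin(80) === 100); // clamped at the upper bound
console.assert(scoreMargin(-5) === 0);   // clamped at the lower bound
```

No mocks, no fixtures, no setup/teardown — that's the whole payoff of rule 1 and rule 3.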

Core pattern: weighted scoring

interface ScoringInput {
  qualityScore: number | null;
  profitScore: number | null;
  marketScore: number | null;
  riskScore: number | null;
}

type Verdict = 'strong_buy' | 'buy' | 'hold' | 'pass';

function computeScore(input: ScoringInput) {
  const quality = input.qualityScore ?? 50;
  const profit = input.profitScore ?? 50;
  const market = input.marketScore ?? 50;
  const risk = input.riskScore ?? 50;

  const overall = Math.round(
    quality * 0.3 +
    profit * 0.3 +
    market * 0.2 +
    risk * 0.2
  );

  let verdict: Verdict;
  if (overall >= 80) verdict = 'strong_buy';
  else if (overall >= 60) verdict = 'buy';
  else if (overall >= 40) verdict = 'hold';
  else verdict = 'pass';

  return { overall, verdict };
}
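Plugging in the example inputs from the end of the post (quality 75, profit 84, market 65, risk 80), the weighted sum is restated inline here so the check is self-contained:

```typescript
// Same weighted sum as computeScore above, with the post's example inputs.
const overall = Math.round(75 * 0.3 + 84 * 0.3 + 65 * 0.2 + 80 * 0.2);
console.assert(overall === 77); // 22.5 + 25.2 + 13 + 16 = 76.7 → 77 → 'buy'
```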

Handling missing data (critical)

All inputs are nullable.

We default to 50 (neutral).

Why not:

  • Skip missing values → breaks comparability
  • Default 0 → unfairly penalizes
  • Default 100 → artificially inflates

Neutral = safest assumption.
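One implementation detail worth flagging: the nullish-coalescing operator (`??`) is the right tool for this default, because `||` would also replace a legitimate score of 0.

```typescript
const missing: number | null = null;
const zeroScore: number | null = 0;

console.assert((missing ?? 50) === 50);   // null falls back to neutral
console.assert((zeroScore ?? 50) === 0);  // a real 0 survives
console.assert((zeroScore || 50) === 50); // || would wrongly "neutralize" a 0 score
```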


Normalization + clamp

All scores must be 0–100.

function clamp(value: number, min: number, max: number) {
  return Math.max(min, Math.min(max, value));
}

const profitScore = clamp(marginPercent * 2, 0, 100);
const marketScore = clamp(100 - saturationPercent, 0, 100);
const riskScore = clamp(100 - rawRiskScore, 0, 100);

Without clamp:

  • values can exceed the 0–100 bounds
  • negative values break downstream logic
  • NaN slips through silently (Math.min/Math.max pass NaN along unchanged, so clamp alone won't catch it — guard for it separately)
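A guarded variant that also catches NaN and ±Infinity might look like this. This is a sketch, not the extension's actual code; falling back to the range midpoint is our assumption, consistent with the neutral default for missing data above.

```typescript
// Clamp that treats non-finite input (NaN, ±Infinity) as missing data.
function safeClamp(value: number, min: number, max: number): number {
  if (!Number.isFinite(value)) return (min + max) / 2; // neutral midpoint (50 for 0–100)
  return Math.max(min, Math.min(max, value));
}

console.assert(safeClamp(150, 0, 100) === 100);
console.assert(safeClamp(-10, 0, 100) === 0);
console.assert(safeClamp(Number.NaN, 0, 100) === 50);
```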

Choosing weights

Not all dimensions are equal.

We weighted:

  • Quality + Profit → higher (controllable)
  • Market + Risk → lower (external factors)

We considered user-configurable weights but dropped the idea:
→ too complex for non-technical users
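One lightweight guard we'd suggest (a sketch, not the extension's code): keep the weights in a single named constant with a sanity check, so a future edit can't silently push scores outside the 0–100 scale.

```typescript
// Weights favor controllable dimensions (quality, profit) over external ones.
const WEIGHTS = {
  quality: 0.3,
  profit: 0.3,
  market: 0.2,
  risk: 0.2,
} as const;

// If the weights don't sum to 1, a 100-point input could map outside 0–100.
const total = Object.values(WEIGHTS).reduce((sum, w) => sum + w, 0);
if (Math.abs(total - 1) > 1e-9) {
  throw new Error(`Weights must sum to 1, got ${total}`);
}
```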


Threshold calibration

Initial thresholds (75 / 50 / 25) were too optimistic.

We:

  1. Scored hundreds of products
  2. Compared with human judgment
  3. Iterated

Lesson:
Never guess thresholds — calibrate them.
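The calibration loop can be as simple as an agreement rate between the engine and human labels. The labeled sample below is hypothetical; the thresholds are the ones from computeScore above.

```typescript
type Verdict = 'strong_buy' | 'buy' | 'hold' | 'pass';

// Same thresholds as computeScore above.
function verdictFor(overall: number): Verdict {
  if (overall >= 80) return 'strong_buy';
  if (overall >= 60) return 'buy';
  if (overall >= 40) return 'hold';
  return 'pass';
}

// Hypothetical labeled sample: engine score vs. a human's verdict.
const labeled: Array<{ score: number; human: Verdict }> = [
  { score: 85, human: 'strong_buy' },
  { score: 72, human: 'buy' },
  { score: 55, human: 'buy' },  // disagreement: engine says 'hold'
  { score: 30, human: 'pass' },
];

const agreements = labeled.filter(p => verdictFor(p.score) === p.human).length;
const agreementRate = agreements / labeled.length;
console.assert(agreementRate === 0.75); // 3 of 4 match; adjust thresholds until this is acceptable
```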


Composition > monolith

We built multiple small engines:

  • Product score
  • Market score
  • Platform score

Then combine:

function computeFinalVerdict(
  productScore: number | null,
  marketScore: number | null,
  platformScore: number | null
) {
  const product = productScore ?? 50;
  const market = marketScore ?? 50;
  const platform = platformScore ?? 50;

  const score = Math.round(
    market * 0.4 + product * 0.35 + platform * 0.25
  );

  const confidence = Math.round(
    Math.min(product, market, platform) * 0.8 + 20
  );

  const reasons: string[] = [];

  if (market >= 70) reasons.push('Favorable market conditions');
  if (market < 40) reasons.push('Challenging market');
  if (product >= 70) reasons.push('Strong product');
  if (product < 40) reasons.push('Weak product');

  return { score, confidence, reasons };
}
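A quick check with made-up inputs (product 70, market 80, platform 60 — our numbers, not from the post), with the arithmetic restated inline so it stands alone:

```typescript
// Same formulas as computeFinalVerdict above.
const score = Math.round(80 * 0.4 + 70 * 0.35 + 60 * 0.25);      // 32 + 24.5 + 15 = 71.5 → 72
const confidence = Math.round(Math.min(70, 80, 60) * 0.8 + 20);  // weakest dimension (60) → 68
console.assert(score === 72);
console.assert(confidence === 68);
```

Note how the platform score of 60 caps confidence even though the market looks strong — that's the "weakest dimension" idea in action.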

Key ideas:

  • Confidence = weakest dimension
  • Reasons = explainability

Example

Input:

  • Quality: 75
  • Profit: 84
  • Market: 65
  • Risk: 80

Result:

  • Score: 77
  • Verdict: buy

If profit rises by roughly 10 points (84 → 94), the rounded score reaches 80 → strong_buy

This kind of reasoning is trivial with pure functions, impossible with black-box ML.


When you SHOULD use ML

Use ML if:

  • You analyze images or text
  • You need pattern discovery
  • You have high-dimensional data (50+ features)

Otherwise:
→ pure functions are simpler, faster, more transparent


Key takeaways

  • Start with pure functions
  • Default missing data to neutral
  • Always clamp values
  • Weight by controllability
  • Compose small engines
  • Calibrate with real data

No training data. No APIs. No latency.
Runs in-browser in under 1ms.

Not for every problem — but for structured scoring, it’s hard to beat.


Curious:
Have you used similar scoring patterns?
Or did you go with ML instead?


Try it free:
https://chromewebstore.google.com/detail/alishopping-%E2%80%94-aliexpress/agiaehdeifaihlndhnhcmopjpjijnfap?utm_source=devto
