I built a CLI that tells you if your release is safe to deploy — in 2 seconds

Younes Ben Tlili

Every team has the same moment before deploying:

"Tests passed... coverage looks okay... should we ship it?"

Someone eyeballs the numbers, checks if anything looks off, and says "yeah, let's go." No real process. No history. No way to catch a slow regression creeping in over weeks.
I got tired of that. So I built one command that answers the question:

```shell
qualityhub analyze
```

What it does
You run your tests. Then:

```shell
npx qualityhub-cli parse jest ./coverage
npx qualityhub-cli analyze
```

You get this:

```
🔍 QualityHub Analysis
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   Project:  my-app@2.3.1
   Branch:   main

   📊 Tests
      ❌ 243/250 passed (97.2%)
      ❌ 4 failed
      ⏭️  3 skipped
      ⏱️  Duration: 12.7s

   📈 Coverage
      Lines:      █████████████████░░░ 87.3%  ▼ -3.2%
      Branches:   ████████████████░░░░ 82.2%
      Functions:  ██████████████████░░ 91.3%

   🚨 Issues Detected
      🟡 4 tests failed (97.2% pass rate)
      🎲 2 flaky tests detected
      📉 Coverage dropped 3.2% since last run
      🆕 2 new test failures since last run
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   🎯 Risk Score:  ⚠️ 72/100 (MEDIUM RISK)
   📋 Decision:    ⚠️  CAUTION — Review issues before deploying
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

A risk score from 0 to 100. A clear decision: PROCEED, CAUTION, or BLOCK. In 2 seconds.

The problem I was solving
At my previous job, we had 20 projects across 5 teams. Quality tracking was done in Excel files. Every sprint, someone would manually collect metrics from each team — test pass rates, coverage percentages, SonarQube results — paste them into a spreadsheet, and email it around.
Nobody caught the slow regressions. Coverage would drop 0.5% per sprint, and after 6 months you'd realize you went from 85% to 70% without anyone noticing.
I wanted something that:

  • Runs in CI, not in someone's inbox
  • Compares every run with the previous one automatically
  • Catches what humans miss: slow drifts, new failures, build slowdowns
  • Gives a clear answer, not a dashboard you have to interpret

How the scoring works
The risk score starts at 100 and loses points across four categories:

Test results (40 points max)
Every failed test costs points. A 2% failure rate means a small penalty; a 10% failure rate gets you blocked.

Coverage (30 points max)
Below 80% line coverage? Penalty. Below 50%? Critical.

Regressions (15 points max)
This is where it gets interesting. The CLI stores history locally in .qualityhub/history.json. On every run, it compares with the last one:

  • Coverage dropped 3%? Flagged.
  • 2 new test failures? Flagged.
  • Build time increased 20%? Flagged.
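That run-over-run comparison can be sketched roughly like this. A minimal illustration only: the real history.json schema is internal to qualityhub-cli, so the field names and the exact 3% / 20% thresholds below are assumptions.

```typescript
// Illustrative regression check against a local history file.
// Field names and thresholds are assumptions, not the tool's actual schema.
import * as fs from "fs";

interface RunRecord {
  lineCoverage: number;   // percent, e.g. 87.3
  failedTests: string[];  // names of failing tests
  durationMs: number;     // total test-run duration
}

function detectRegressions(
  current: RunRecord,
  historyPath = ".qualityhub/history.json"
): string[] {
  const flags: string[] = [];
  if (!fs.existsSync(historyPath)) return flags; // first run: nothing to compare

  const history: RunRecord[] = JSON.parse(fs.readFileSync(historyPath, "utf8"));
  const last = history[history.length - 1];
  if (!last) return flags;

  // Coverage dropped 3% or more since the last run
  const drop = last.lineCoverage - current.lineCoverage;
  if (drop >= 3) flags.push(`Coverage dropped ${drop.toFixed(1)}%`);

  // Tests that fail now but passed last time
  const fresh = current.failedTests.filter(t => !last.failedTests.includes(t));
  if (fresh.length > 0) flags.push(`${fresh.length} new test failures`);

  // Build slowed down by more than 20%
  if (current.durationMs > last.durationMs * 1.2) {
    flags.push("Build time increased >20%");
  }

  return flags;
}
```

The key point is that each run only needs the previous record to spot a regression, which is why a flat JSON file is enough.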

Issue severity (15 points max)
Critical issues (security vulnerabilities, quality gate failures) cost more than warnings.

The final score maps to a decision:

  • 85-100: PROCEED ✅
  • 65-84: CAUTION ⚠️
  • 40-64: HIGH RISK 🔶
  • 0-39: BLOCK 🛑
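Put together, the model can be sketched like this. The category caps (40/30/15/15) and the decision bands come from the post; the exact curves and weights inside each category are assumptions, not qualityhub-cli's actual implementation.

```typescript
// Illustrative penalty-based risk score: start at 100, subtract capped
// penalties per category. The per-category weights are assumptions.

interface RunMetrics {
  passed: number;
  total: number;
  lineCoverage: number;   // percent, e.g. 87.3
  coverageDelta: number;  // change vs. previous run, e.g. -3.2
  newFailures: number;
  criticalIssues: number;
  warnings: number;
}

function riskScore(m: RunMetrics): number {
  let score = 100;

  // Tests: up to 40 points; a 10% failure rate exhausts the budget
  const failRate = (m.total - m.passed) / m.total;
  score -= Math.min(40, Math.round(failRate * 400));

  // Coverage: up to 30 points, one point per percent below an 80% target
  if (m.lineCoverage < 80) {
    score -= Math.min(30, Math.round(80 - m.lineCoverage));
  }

  // Regressions: up to 15 points for coverage drops and new failures
  let regression = 0;
  if (m.coverageDelta <= -3) regression += 8;
  regression += m.newFailures * 4;
  score -= Math.min(15, regression);

  // Issue severity: up to 15 points; criticals cost more than warnings
  score -= Math.min(15, m.criticalIssues * 5 + m.warnings * 2);

  return Math.max(0, score);
}

function decision(score: number): string {
  if (score >= 85) return "PROCEED";
  if (score >= 65) return "CAUTION";
  if (score >= 40) return "HIGH RISK";
  return "BLOCK";
}
```

Feeding this sketch the numbers from the sample output above (243/250 passed, coverage down 3.2%, two new failures) lands it in the CAUTION band, roughly in line with the 72/100 shown.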

No server needed
This is the part I'm most proud of. Everything runs locally:

  • History is stored in .qualityhub/history.json
  • No account to create
  • No SaaS to connect
  • No API keys

Install and run:

```shell
npm install -g qualityhub-cli
qualityhub parse jest ./coverage
qualityhub analyze
```

That's it. Value in 30 seconds.

CI/CD integration
The CLI exits with code 1 when the decision is BLOCK. So you can use it as a quality gate:

```yaml
# .gitlab-ci.yml
quality-check:
  stage: test
  script:
    - npm test -- --coverage
    - npx qualityhub-cli parse jest ./coverage
    - npx qualityhub-cli analyze
  cache:
    paths:
      - .qualityhub/
```

The cache persists the history between pipeline runs, so regression detection works across commits.
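If you're on GitHub instead, an equivalent job might look like this. Illustrative only: the commands mirror the GitLab example, but the workflow layout and cache key naming are my own, not official qualityhub-cli documentation.

```yaml
# .github/workflows/quality.yml (illustrative)
name: quality-check
on: [push]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: .qualityhub
          # Unique key plus restore-keys so the updated history is saved
          # every run and the most recent one is restored next time.
          key: qualityhub-history-${{ github.run_id }}
          restore-keys: qualityhub-history-
      - run: npm ci
      - run: npm test -- --coverage
      - run: npx qualityhub-cli parse jest ./coverage
      - run: npx qualityhub-cli analyze
```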

Markdown reports for merge requests
You can also generate a Markdown report to post on your MR:

```shell
qualityhub analyze --format markdown --output report.md
```

What's next
I'm working on:

  • Auto-commenting on MRs — qualityhub analyze --comment posts directly on your GitLab MR or GitHub PR
  • More parsers — pytest, XCTest, Go test
  • AI-powered insights — Using LLMs to explain why your quality is changing and what to fix first

Try it

```shell
npm install -g qualityhub-cli

# Or test with example data:
git clone https://github.com/ybentlili/qualityhub-cli
cd qualityhub-cli
npm install && npm run build && npm link
qualityhub parse jest ./examples/jest
qualityhub analyze
```

GitHub: ybentlili/qualityhub-cli
npm: qualityhub-cli
It's MIT licensed. Stars, feedback, and contributions are welcome.
If you've ever looked at a CI pipeline and thought "I think we can ship this" — this is for you.
