Bots, scanners, and noisy automated traffic are common in Node.js apps.
Instead of trying to detect humans with 100% certainty, I built a small open-source package that does something simpler and more honest:
It scores incoming HTTP requests (0–100) based on risk.
request-risk-score
A lightweight, privacy-first Node.js library that:
Scores HTTP requests using transparent heuristics
Avoids browser fingerprinting
Uses no external or paid APIs
Handles search engine crawlers safely
npm install request-risk-score
Usage
// Score a single request described as a plain object (ip, headers, url)
const { analyzeRequest } = require('request-risk-score');

const result = analyzeRequest({
  ip: '10.0.0.5',
  headers: { 'user-agent': 'curl/7.68.0' },
  url: '/admin/login'
});

console.log(result);
Example output:
{
  "score": 75,
  "bucket": "likely_automated",
  "signals": ["tool_user_agent", "sensitive_path", "no_cookies"]
}
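The signals array is what makes the score explainable: every point comes from a named check. To give a rough idea of how a transparent, weighted-signal approach can produce a number like this, here is a minimal sketch. The signal weights, thresholds, and bucket labels below are my own illustrative assumptions, not the package's actual rules.

// Illustrative sketch only, not the library's internals:
// each detected signal adds a fixed weight, and the total is clamped to 0-100.
const WEIGHTS = {
  tool_user_agent: 40, // e.g. curl, wget, python-requests
  sensitive_path: 25,  // e.g. /admin, /wp-login.php
  no_cookies: 10       // request carries no cookies at all
};

function scoreSignals(signals) {
  const raw = signals.reduce((sum, s) => sum + (WEIGHTS[s] || 0), 0);
  const score = Math.min(100, raw);
  const bucket = score >= 70 ? 'likely_automated'
    : score >= 40 ? 'suspicious'
    : 'likely_human'; // hypothetical bucket names
  return { score, bucket, signals };
}

console.log(scoreSignals(['tool_user_agent', 'sensitive_path', 'no_cookies']));
// -> { score: 75, bucket: 'likely_automated', signals: [...] }

The package's real heuristics are its own; the point is that a score like this is always traceable back to a list of human-readable signals.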
Why Risk Scoring?
Blocking is a decision you should make yourself, not something the library makes for you (see the sketch after this list)
The library only returns a probability and an explanation of the signals behind it
Works well for small APIs and services that don't sit behind a WAF
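For example, if you want to make that blocking decision yourself, a thin middleware is one natural place to do it. The sketch below assumes Express, an arbitrary threshold of 90, and an arbitrary X-Risk-Score header name; only analyzeRequest and the score/bucket/signals fields come from the usage example above.

// Sketch: attach the risk result to every request, keep the blocking decision in app code.
const express = require('express');
const { analyzeRequest } = require('request-risk-score');

const app = express();

app.use((req, res, next) => {
  const result = analyzeRequest({
    ip: req.ip,
    headers: req.headers,
    url: req.originalUrl
  });

  // Expose the result for logging and metrics instead of blocking blindly.
  req.riskScore = result;
  res.set('X-Risk-Score', String(result.score)); // header name is arbitrary

  // The threshold (and whether to block at all) is your call; 90 is just an example.
  if (result.score >= 90) {
    return res.status(403).json({ error: 'Request blocked by risk policy' });
  }
  next();
});

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);

Logging the score for a while before enforcing any threshold is usually the safer rollout.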
Links
npm: https://www.npmjs.com/package/request-risk-score
Blog post: https://tutohub.com/blogs/detect-suspicious-http-requests-nodejs-risk-scoring
Feedback welcome!