The bot problem
We run APIs. Nothing special, just services that people pay for.
One day we noticed weird traffic patterns: thousands of requests to the same endpoints, perfect timing, no mouse movement before the clicks. Bots.
They weren't doing anything malicious per se. Just scraping data and abusing free tiers. But they were costing us money and slowing things down for real users.
What we tried first
Rate limiting by IP - Useless. Residential proxies are cheap. They just rotate IPs.
CAPTCHAs - Users hated it. Conversion dropped. And there are CAPTCHA-solving services anyway.
User agent checks - Trivial to fake.
Honeypot fields - Caught maybe 10% of bots.
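For context, a honeypot is a form field hidden from real users, so any value in it suggests an automated fill. A minimal check (the field name `website` is purely illustrative):

```javascript
// Honeypot check: the "website" field is hidden via CSS, so humans never
// fill it. Any non-empty value is a strong bot signal.
function isHoneypotTripped(formData) {
  return typeof formData.website === 'string' && formData.website.trim() !== '';
}
```

The catch, as noted above: most bots now parse the DOM well enough to skip hidden fields, so this only filters the laziest ones.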
None of this worked well enough.
The real question
How do you tell a human from a bot?
Humans are chaotic. They move their mouse in curves. They pause to read. They scroll at irregular intervals. They make typos.
Bots are mechanical. Perfect timing. Straight-line mouse movements (if any). No reading time. Predictable patterns.
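That timing difference is measurable. A rough sketch, not ATTEST's actual code, that scores how regular the gaps between requests are; a coefficient of variation near zero means machine-perfect timing:

```javascript
// Returns the coefficient of variation (stddev / mean) of the gaps between
// consecutive timestamps. Bots firing on a fixed interval score near 0;
// humans, with their pauses and bursts, score much higher.
function gapVariation(timestampsMs) {
  const gaps = [];
  for (let i = 1; i < timestampsMs.length; i++) {
    gaps.push(timestampsMs[i] - timestampsMs[i - 1]);
  }
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, g) => a + (g - mean) ** 2, 0) / gaps.length;
  return Math.sqrt(variance) / mean;
}
```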
What we built
ATTEST. An API protection layer that analyzes request authenticity.
It looks at:
- Browser fingerprint consistency
- Request timing patterns
- Environment signals (headless browser detection)
- Behavioral patterns from the frontend
Each request gets a score. You set the threshold for what gets through.
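As one illustration of the environment-signal idea, headless browsers often leak through `navigator` properties. A simplified check (real detection combines far more signals than this):

```javascript
// Collects a few well-known headless-browser tells from a navigator-like
// object. Each one alone is weak; together they feed into a score.
function headlessSignals(nav) {
  return {
    webdriver: nav.webdriver === true,                    // set by most automation tools
    noLanguages: !nav.languages || nav.languages.length === 0,
    headlessUA: /HeadlessChrome/.test(nav.userAgent || ''),
  };
}
```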
Implementation
Server-side, you verify the attestation token:
```javascript
const result = await attest.verify(req.headers['x-attest-token']);

if (!result.valid || result.score < 70) {
  return res.status(403).json({ error: 'Verification failed' });
}
```
The frontend SDK collects signals and generates tokens. The backend verifies them.
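In an Express-style app, that verification step fits naturally into a middleware. `attest.verify` and the `x-attest-token` header come from the snippet above; everything else here is a sketch:

```javascript
// Hypothetical middleware wrapping the verify call shown above.
// `attest` is the verification client; minScore is the tunable threshold.
function requireAttestation(attest, minScore = 70) {
  return async (req, res, next) => {
    const token = req.headers['x-attest-token'];
    if (!token) {
      return res.status(403).json({ error: 'Missing attestation token' });
    }
    const result = await attest.verify(token);
    if (!result.valid || result.score < minScore) {
      return res.status(403).json({ error: 'Verification failed' });
    }
    next(); // request looks human enough; continue to the route handler
  };
}
```

This keeps the threshold in one place instead of scattering score checks across route handlers.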
What it catches
- Headless browsers (Puppeteer, Playwright)
- Basic HTTP clients (curl, Python requests)
- Most commercial scraping tools
- Automated form submissions
What it doesn't catch
Sophisticated attackers who instrument real browsers, solve challenges manually, and mimic human behavior perfectly. But those are rare and expensive to operate at scale.
The trade-off
False positives happen. Legitimate users with unusual setups (Tor, heavy privacy extensions, very old browsers) might get flagged. You tune the threshold based on your tolerance.
We run ours at 65. Blocks most bots, rarely affects real users.
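One way to soften those false positives is to replace the single hard cutoff with a middle band that gets a challenge (step-up verification) instead of an outright block. The cutoffs here are illustrative, not what we run:

```javascript
// Three-way decision instead of a binary block: hard-block only very low
// scores, challenge the ambiguous middle band, let the rest through.
// blockBelow / challengeBelow are tuned per-app; these defaults are made up.
function decide(score, blockBelow = 40, challengeBelow = 65) {
  if (score < blockBelow) return 'block';
  if (score < challengeBelow) return 'challenge';
  return 'allow';
}
```

That way a Tor user with a weird score gets an extra check rather than a dead end.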
Details
sekyuriti.build/modules/attest
Free tier available for testing.