I recently faced a problem on my SaaS (DocBeacon, for document sharing & tracking): a few new accounts uploaded documents with malicious QR codes that redirected to phishing sites.
The scale wasn’t huge (700–800 visits over 2 days), but it was serious enough to highlight how easily abuse can happen.
What I’ve already implemented:
- Human verification at signup (Cloudflare Turnstile)
- Email verification
- A “Report Abuse” button on every shared page
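Of the safeguards above, Turnstile is the one with a real server-side component: after the widget runs in the browser, your backend must POST the token to Cloudflare's siteverify endpoint. A minimal sketch in Python; the `http_post` parameter is my own addition so the function can be exercised without network access, and in a real app you'd call your HTTP client directly:

```python
import json
import urllib.parse
import urllib.request

# Cloudflare's documented server-side verification endpoint.
TURNSTILE_VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def verify_turnstile(token, secret, http_post=None):
    """Return True if Cloudflare Turnstile accepts the client token.

    `http_post` is injectable purely for testability (an assumption of
    this sketch); by default it POSTs form data to the real endpoint.
    """
    if http_post is None:
        def http_post(url, data):
            body = urllib.parse.urlencode(data).encode()
            with urllib.request.urlopen(url, body) as resp:
                return json.load(resp)
    result = http_post(TURNSTILE_VERIFY_URL,
                       {"secret": secret, "response": token})
    return bool(result.get("success"))
```

The key point is to never trust the client-side widget alone: only the siteverify response tells you the challenge was actually solved.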
But here’s where I’d love advice from the community:
- For indie devs, what lightweight safeguards actually work?
- Do you recommend publishing an “Abuse Policy” page even with a small user base?
- How do you balance abuse prevention with smooth onboarding? For example, I had to lower the per-share visit limit so that a malicious share can’t spread widely before it gets caught.
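The visit-limit idea can be sketched as a small per-share counter with a lower cap for shares that haven't been reviewed yet. The class name, cap values, and in-memory dict here are all my assumptions for illustration; a real app would back this with its database or a cache:

```python
class ShareLimiter:
    """Caps visits per share link; new/unreviewed shares get a low cap
    so an abusive link is throttled before it spreads widely."""

    def __init__(self, default_cap=1000, unreviewed_cap=50):
        self.default_cap = default_cap
        self.unreviewed_cap = unreviewed_cap
        self._visits = {}  # share_id -> visit count (in-memory for the sketch)

    def allow_visit(self, share_id, reviewed=False):
        """Record one visit; return False once the share is over its cap."""
        cap = self.default_cap if reviewed else self.unreviewed_cap
        count = self._visits.get(share_id, 0)
        if count >= cap:
            return False  # over cap: serve a "link paused" page instead
        self._visits[share_id] = count + 1
        return True
```

The nice property of this shape is that onboarding stays smooth: legitimate users never notice the cap, and you only raise it once a share (or account) has earned trust.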
Abuse is inevitable for any platform, but I think indie founders can learn a lot from each other.