Fail2ban is useful. I run it on every VPS.
On internet-exposed systems, brute-force SSH traffic never really stops.
If your security plan is only “install fail2ban,” your server is still exposed.
The core issue: fail2ban is reactive. It reads logs and bans sources after bad activity happens. That reduces noise, but it does not reduce your attack surface.
What fail2ban does well
For SSH, fail2ban is good at:
- detecting repeated failed authentication attempts
- banning obvious brute-force sources
- reducing background bot noise
That is real value. Keep it.
Where fail2ban alone breaks
This is where the operational gap appears.
1) It reacts after the hit
Attackers still reach the service first. The ban happens later.
2) It only protects what you configured
No jail, no protection.
3) It does not hide your real target
If real SSH is public, scanners will keep hitting it indefinitely.
4) Low-and-slow traffic evades thresholds
Attackers rotate IPs and stay below ban limits.
5) “Installed” ≠ “effective”
Common weak setups include:
- default jails only
- short ban windows
- no escalation for repeat offenders
- no alert feedback loop
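A jail.local sketch that closes each of those gaps (illustrative values, tune for your traffic; `bantime.increment` requires fail2ban 0.11 or newer):

```ini
# /etc/fail2ban/jail.local -- illustrative values, not a drop-in standard
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5
# escalate repeat offenders instead of re-issuing the same short ban
bantime.increment = true
bantime.factor    = 2
bantime.maxtime   = 1w

[sshd]
enabled = true

[recidive]
# watches fail2ban's own log and long-bans IPs that keep coming back
enabled  = true
bantime  = 1w
findtime = 1d
```

Then verify it actually bans (`fail2ban-client status sshd`) and wire the ban action into your alert channel, so there is a feedback loop instead of a silent log.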
What attacker flow usually looks like
On exposed SSH, activity typically follows a predictable pattern:
- credential spray (root/password, common combos)
- probe command (echo "ok" style validation)
- host fingerprinting (uname, cpuinfo, meminfo)
- persistence attempt (authorized_keys edits, flags)
- malware or script download attempt
Fail2ban mainly reduces the noise from step one. It does not address the rest of the chain.
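The first step of that chain is visible directly in sshd logs. A quick way to see who is hammering you, sketched here against a synthetic log excerpt (on a real host, point the awk at /var/log/auth.log or `journalctl -u ssh` output instead):

```shell
# Synthetic auth.log sample; the IPs are documentation ranges.
cat <<'EOF' > /tmp/auth-sample.log
Jan 10 03:12:01 vps sshd[912]: Failed password for root from 203.0.113.7 port 51022 ssh2
Jan 10 03:12:04 vps sshd[915]: Failed password for invalid user admin from 203.0.113.7 port 51040 ssh2
Jan 10 03:12:09 vps sshd[921]: Failed password for root from 198.51.100.23 port 40112 ssh2
Jan 10 03:12:15 vps sshd[930]: Accepted publickey for deploy from 192.0.2.10 port 52110 ssh2
EOF

# Count failed password attempts per source IP, busiest first.
awk '/Failed password/ {
  for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1)
}' /tmp/auth-sample.log | sort | uniq -c | sort -rn
```

Fail2ban automates exactly this counting-and-banning loop for step one; the later steps never appear in auth.log at all, which is where honeypot telemetry earns its keep.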
What to do instead: layered baseline
Use fail2ban as one layer, not the strategy.
Layer A — Put a decoy on port 22
Run Cowrie so scanners interact with fake SSH instead of the real service.
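One common way to wire that up, assuming Cowrie is installed and listening on its default port 2222 (illustrative rule; persist it with your distro's firewall tooling):

```shell
# Send inbound TCP/22 to Cowrie's listener on 2222.
# Real sshd must already be off port 22 (see the next layer).
iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-port 2222
```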
Layer B — Hide real SSH
Move real sshd off the public interface (loopback-only) and access it through secure ingress such as a Cloudflare Tunnel.
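On the sshd side, a minimal sketch (keep a working session open until you have confirmed you can still log in through the tunnel):

```
# /etc/ssh/sshd_config.d/10-loopback.conf (sketch)
ListenAddress 127.0.0.1
PasswordAuthentication no
PermitRootLogin no
```

Reload sshd, then point the tunnel's ingress at `ssh://127.0.0.1:22`. Any secure ingress works here; WireGuard to the loopback address is an equivalent pattern.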
Layer C — Default-deny firewall
Expose only services that must be reachable.
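With ufw, the baseline is short (a sketch; the allowed ports are examples, open only what you actually serve):

```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp     # reaches the Cowrie decoy, not real sshd
sudo ufw allow 80/tcp     # example: public web
sudo ufw allow 443/tcp
sudo ufw enable
```

There is deliberately no rule exposing real SSH: it is loopback-only behind the tunnel from the previous layer.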
Layer D — Volume-based auto-ban
Quickly block high-volume sources detected via honeypot telemetry.
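A sketch of that loop, assuming an nftables set named `banned` already exists in table `inet filter`. The input here is a synthetic one-IP-per-event extract (real Cowrie telemetry is JSON, so you would pull out `src_ip` first):

```shell
# Volume-based auto-ban sketch: one source IP per honeypot event.
THRESHOLD=3

cat <<'EOF' > /tmp/honeypot-ips.txt
203.0.113.7
203.0.113.7
203.0.113.7
203.0.113.7
198.51.100.23
EOF

# Emit a ban command for every source above the threshold; pipe to sh to apply.
sort /tmp/honeypot-ips.txt | uniq -c | awk -v t="$THRESHOLD" \
  '$1 > t { printf "nft add element inet filter banned { %s }\n", $2 }'
```

Running this from cron every few minutes gives you fast blocking driven by the decoy's traffic, independent of fail2ban's per-jail thresholds.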
Layer E — Fix alert quality
Route everything into one clean operations channel. Noisy alerts train people to ignore them.
Layer F — Handle secrets properly
Avoid long-lived plaintext secrets on disk.
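One pattern for this on systemd hosts (247+), sketched for a hypothetical `alerter` service that needs a webhook token; the service name and paths are placeholders:

```
# /etc/systemd/system/alerter.service.d/credentials.conf (sketch)
[Service]
# token file is root-owned, mode 600; the service reads it at
# $CREDENTIALS_DIRECTORY/webhook-token instead of a plaintext env var
LoadCredential=webhook-token:/etc/credstore/webhook-token
```

Credentials loaded this way are exposed only to that service while it runs, which beats exporting secrets in shell profiles or baking them into deploy scripts.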
Minimal checklist for small teams
If you run a public VPS, this is a practical baseline:
- fail2ban tuned and verified
- firewall set to default deny
- real SSH not publicly exposed
- honeypot or equivalent SSH telemetry
- meaningful alerts for spikes and bans
- weekly log review and threshold tuning
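The weekly review can be one command plus five minutes of judgment. A sketch against a synthetic fail2ban log excerpt (on a real host, run the last line against /var/log/fail2ban.log):

```shell
# Synthetic fail2ban log sample (format approximated).
cat <<'EOF' > /tmp/fail2ban-sample.log
2024-01-10 03:12:10 fail2ban.actions [402]: NOTICE [sshd] Ban 203.0.113.7
2024-01-10 04:02:44 fail2ban.actions [402]: NOTICE [sshd] Ban 198.51.100.23
2024-01-11 09:15:02 fail2ban.actions [402]: NOTICE [sshd] Ban 203.0.113.7
EOF

# Most-banned source IPs for the period, busiest first.
grep ' Ban ' /tmp/fail2ban-sample.log | awk '{print $NF}' | sort | uniq -c | sort -rn
```

If the same sources top the list every week, lengthen their bans; if counts spike, check the honeypot telemetry for what they attempted after connecting.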
One-day hardening plan
In a single focused session:
- tune fail2ban jails and verify bans
- move real SSH off the public interface and enforce key-only auth
- add honeypot (or structured SSH telemetry) and alerts
- implement auto-ban for high-volume sources
- review logs and tighten thresholds
Practical takeaway
Keep fail2ban.
Just don’t treat it as the whole strategy.
On public infrastructure, security comes from layers: less exposure, better telemetry, faster response. Teams operating with smaller attack surfaces and clear visibility make better decisions when incidents happen.