When you deploy a server on the public internet, something interesting happens almost immediately:
Bots start knocking.
Curious about what actually hits a public server, I enabled detailed logging and collected 1 million HTTP requests over several weeks. The goal was simple:
- Understand how much malicious traffic reaches a normal server
- Analyze patterns in web attack logs
- See what automated attackers actually do
The results were eye-opening.
## The Setup

The experiment was simple: a small cloud server running

- Nginx
- A simple web application
- Access logging enabled
All incoming requests were logged with:

- IP address
- timestamp
- request path
- HTTP status
- user agent
Example log entry:

```
192.168.1.10 - - [10/Feb/2026:14:22:31] "GET /admin HTTP/1.1" 404
```
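To analyze the logs, each line has to be split into fields first. A minimal Python sketch, assuming the simplified format shown above (the regex and field names are my own, not the exact script used for the experiment):

```python
import re

# Matches the simplified log format above: IP, two dashes,
# bracketed timestamp, quoted request line, status code.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" (?P<status>\d{3})'
)

def parse_line(line):
    """Return a dict of log fields, or None if the line doesn't match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

entry = parse_line(
    '192.168.1.10 - - [10/Feb/2026:14:22:31] "GET /admin HTTP/1.1" 404'
)
# entry["ip"] == "192.168.1.10", entry["path"] == "/admin", entry["status"] == "404"
```

With a parser like this, a million log lines reduce to a list of dicts that the later counting and pattern-matching steps can work on.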
After collecting 1,000,000 requests, the logs were analyzed to identify patterns.
## The First Surprise: Most Traffic Was Not Human
Out of the 1 million requests:
| Type | Approximate Share |
|---|---|
| Legitimate user traffic | ~8% |
| Bots / crawlers | ~72% |
| Suspicious scans | ~15% |
| Clear malicious attempts | ~5% |
That means roughly 92% of the traffic was automated.
Most servers on the internet are constantly probed by bots looking for something vulnerable.
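Separating human from automated traffic is itself a heuristic. A rough sketch based on user-agent markers (the marker list here is illustrative, not the one used in the experiment; real classification also needs behavioral signals, since many scanners spoof browser user agents):

```python
# Illustrative substrings commonly seen in bot/scanner user agents.
BOT_MARKERS = ("bot", "crawler", "spider", "curl", "python-requests", "zgrab")

def looks_automated(user_agent: str) -> bool:
    """Very rough heuristic: flag empty or bot-like user agents."""
    ua = user_agent.lower()
    return ua in ("", "-") or any(marker in ua for marker in BOT_MARKERS)

print(looks_automated("Mozilla/5.0 (compatible; Googlebot/2.1)"))        # True
print(looks_automated("Mozilla/5.0 (Windows NT 10.0) Firefox/118.0"))    # False
```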
## What Malicious Traffic Looked Like
The malicious traffic fell into several common categories.
### 1. Admin Panel Probing

Attackers constantly look for common admin paths. Typical requests included:

```
/admin
/wp-admin
/login
/dashboard
/manager/html
/phpmyadmin
```

Example log:

```
GET /phpmyadmin HTTP/1.1
```
These requests come from automated scanners searching for misconfigured management interfaces.
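Tallying how often such paths are probed is a short job with `collections.Counter`. A sketch, using the example paths above as the watch list:

```python
from collections import Counter

# Watch list mirroring the admin paths listed above.
SUSPICIOUS_PATHS = {
    "/admin", "/wp-admin", "/login",
    "/dashboard", "/manager/html", "/phpmyadmin",
}

def count_admin_probes(paths):
    """Count requests whose path exactly matches a known probe target."""
    return Counter(p for p in paths if p in SUSPICIOUS_PATHS)

sample = ["/admin", "/index.html", "/phpmyadmin", "/admin"]
print(count_admin_probes(sample))  # Counter({'/admin': 2, '/phpmyadmin': 1})
```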
### 2. WordPress Attack Scanners

Even though the server wasn't running WordPress, many bots assumed it was. Requests included:

```
/wp-login.php
/xmlrpc.php
/wp-admin/admin-ajax.php
```

Example:

```
POST /xmlrpc.php HTTP/1.1
```
These are usually part of automated botnets targeting WordPress sites.
### 3. Vulnerability Scanning

Some requests were clearly trying to identify known vulnerabilities. Examples:

```
/.env
/.git/config
/backup.zip
/server-status
/config.json
```

Example log entry:

```
GET /.env HTTP/1.1
```
Attackers hope to find exposed configuration files containing secrets.
### 4. SQL Injection Attempts

Many requests contained suspicious query parameters. Example:

```
GET /product?id=1' OR '1'='1
```

Or encoded payloads:

```
?id=1%27%20UNION%20SELECT%20password
```
These are classic SQL injection probes used by automated scanners.
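A crude way to flag such probes is to percent-decode the query string and match a few classic signatures. A sketch (the patterns are illustrative only; production WAFs use far larger and more careful rule sets):

```python
import re
from urllib.parse import unquote

# A few classic SQL injection signatures, matched case-insensitively.
SQLI_PATTERN = re.compile(
    r"'.*(or|union|select)"      # quote followed by a SQL keyword
    r"|union\s+select"           # UNION SELECT
    r"|or\s+'?1'?\s*=\s*'?1",    # OR 1=1 / OR '1'='1
    re.IGNORECASE,
)

def looks_like_sqli(query_string: str) -> bool:
    """Decode percent-encoding first so encoded payloads are caught too."""
    decoded = unquote(query_string)
    return bool(SQLI_PATTERN.search(decoded))

print(looks_like_sqli("id=1' OR '1'='1"))                      # True
print(looks_like_sqli("id=1%27%20UNION%20SELECT%20password"))  # True
print(looks_like_sqli("id=42&page=2"))                         # False
```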
### 5. Path Traversal Attempts

Another common pattern involved attempts to access sensitive system files. Example:

```
GET /../../../../etc/passwd
```

Or encoded variants:

```
GET /..%2f..%2f..%2fetc%2fpasswd
```
These attacks attempt to read files outside the web root.
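Spotting the encoded variants requires decoding before matching, since `%2f` hides the `../` sequence from a naive substring check. A sketch (the repeated-decode loop is a simple guard against double-encoded payloads):

```python
from urllib.parse import unquote

def looks_like_traversal(path: str) -> bool:
    """Flag ../ sequences, including percent-encoded variants."""
    decoded = path
    for _ in range(3):  # bounded rounds to also catch double encoding
        decoded = unquote(decoded)
    return "../" in decoded or "..\\" in decoded

print(looks_like_traversal("/../../../../etc/passwd"))       # True
print(looks_like_traversal("/..%2f..%2f..%2fetc%2fpasswd"))  # True
print(looks_like_traversal("/static/app.css"))               # False
```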
## Attack Timing Patterns

The logs also showed interesting timing behavior. Many malicious scanners behaved like this:

1. Scan hundreds of paths
2. Wait several seconds
3. Scan another set of endpoints
4. Move on to the next server
Some IP addresses made thousands of requests in minutes, which is typical for automated scanning tools.
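Finding those bursty IPs in the logs is straightforward: bucket requests per IP per minute and flag outliers. A sketch (the threshold is arbitrary and would need tuning for real traffic):

```python
from collections import Counter
from datetime import datetime

def find_bursty_ips(entries, threshold=100):
    """Return IPs exceeding `threshold` requests in any single minute.

    `entries` is an iterable of (ip, timestamp) pairs, with timestamps
    in the log's "10/Feb/2026:14:22:31" format.
    """
    per_minute = Counter()
    for ip, ts in entries:
        when = datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S")
        per_minute[(ip, when.replace(second=0))] += 1
    return {ip for (ip, _), count in per_minute.items() if count > threshold}
```

For example, an IP that makes 500 requests inside one minute would show up with any reasonable threshold, while normal browsing stays far below it.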
## The Long Tail of Random Probes

Some requests looked almost random:

```
/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php
/cgi-bin/test.cgi
/.aws/credentials
```
These come from scanners trying huge lists of known vulnerable paths.
Even if a vulnerability only affects a tiny fraction of servers, scanning the whole internet makes it worthwhile.
## Why Logging Matters

Without logging, these attacks are invisible. Access logs help you:

- see attack patterns
- detect suspicious traffic
- identify scanning bots
- investigate incidents
Many developers underestimate how much malicious traffic actually hits their applications. In reality, probing starts within minutes of exposing a server to the internet.
## The Reality of the Public Internet

The internet is continuously scanned by:

- botnets
- vulnerability scanners
- security researchers
- automated exploit tools
Most attacks are not targeted — they are opportunistic.
Attackers simply scan millions of servers and exploit whichever ones are vulnerable.
## Protecting Your Server

Several layers of protection can significantly reduce risk. Recommended defenses:

- close unused ports
- restrict access to admin interfaces
- apply rate limiting
- monitor access logs
- deploy a web application firewall (WAF)
Even basic protections can stop a large portion of automated attacks.
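Rate limiting, for instance, takes only a few lines in Nginx using the standard `limit_req` module. A sketch (the zone name and limits are illustrative, not tuned values):

```nginx
# Track clients by IP and allow roughly 10 requests/second each.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # Allow short bursts of up to 20 extra requests, then reject.
        limit_req zone=perip burst=20 nodelay;
        # ... the rest of the site configuration ...
    }
}
```

This alone blunts the "thousands of requests in minutes" scanners seen in the logs, without affecting normal visitors.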
## Final Thoughts

After analyzing 1 million requests, the takeaway was clear: the internet is extremely noisy.

A typical server will constantly receive:

- port scans
- admin probes
- vulnerability scans
- automated exploits
Most of this traffic is generated by bots looking for easy targets.
Understanding your web attack logs is the first step toward improving security.
If you want an additional protection layer, tools like SafeLine WAF can automatically analyze request patterns, detect malicious traffic, and block many of these attacks before they reach your application.
Live Demo: https://demo.waf.chaitin.com:9443/statistics
Discord: https://discord.gg/dy3JT7dkmY
Github: https://github.com/chaitin/SafeLine