Why Your WordPress Site Is Slow (And Google Analytics Has No Idea)

Your client swears the site is slow. You check Google Analytics — traffic looks normal. PageSpeed scores are decent. The server isn't maxed out. Nothing obvious.

Sound familiar?

We just diagnosed exactly this scenario on a real production site. The culprit wasn't a bloated plugin, a bad hosting plan, or unoptimized images. It was invisible traffic that no standard analytics tool was catching — and it had been hammering the server for months.
Here's what we found, how we found it, and what it means for every WordPress site you manage.

The complaint
The client had been reporting slow load times for months. Intermittent. Hard to reproduce. "Sometimes it's fine, sometimes it just crawls."
Classic symptoms of a resource contention problem — something is periodically consuming server capacity, leaving legitimate users waiting. But every standard diagnostic pointed to nothing.
The issue: every diagnostic tool they'd tried sees only browser-based traffic. Google Analytics, Plausible, Fathom — they all work by running a JavaScript snippet in the visitor's browser. No browser, no data. And as it turned out, a significant portion of what was hitting their server had no browser at all.


What server-level analysis revealed
When we analyzed the raw server traffic using SysWP Radar — which captures every request at the server level, before JavaScript ever runs — the picture changed completely.
The "unknown" traffic bucket alone contained ~68,000 requests in the analysis window. That's traffic that was completely invisible to their existing analytics. Let's break down what was inside it:

The main offender: Go-http-client/1.1 — 67,323 hits
This single user agent accounted for 99% of all unknown traffic.
Go-http-client/1.1 is the default HTTP client from the Go programming language. It's not a browser, not a legitimate crawler — it's a library. At 67,000+ requests, this is not accidental. Someone has built an automated scraper in Go and is systematically crawling the site at scale.
What it does to performance: Each of those 67,000 requests consumed a PHP worker, queried the database, and generated a full WordPress page render. The site's hosting plan had a fixed number of concurrent PHP workers. When the scraper was active, it was competing directly with real visitors for those workers — causing exactly the intermittent slowness the client was experiencing.
Risk level: High. This kind of systematic scraping can:

  • Exhaust PHP worker pools, causing 503 errors for real users
  • Inflate hosting costs if the plan bills by resource usage
  • Steal content for AI training, competitor analysis, or outright republishing
  • Degrade Core Web Vitals scores measured from real user data

Secondary attackers found in the same bucket
The remaining ~750 requests revealed a full ecosystem of malicious and suspicious activity:
axios/1.15.0 — 308 hits
Axios is a Node.js HTTP library. Used here as a scraper. Not a browser, not a search engine — a script making automated requests.
Spoofed browser user agents — ~370 hits combined
Several request clusters were using user agent strings that looked like real browsers but contained telltale inconsistencies:

  • Mozilla/5.0 (Mac/Win)... AppleWebKit/605... Safari hitting /wp-json/wp/v2/ — a legitimate-looking Safari UA being used to enumerate WordPress REST API endpoints. The endpoint /wp-json/wp/v2/users is specifically targeted to harvest usernames for brute-force attacks.
  • Mozilla/5.0 AppleWebKit/537.36 with no platform identifier — malformed, a dead giveaway for a spoofed UA.
  • Mozilla/5.0 (Windows NT 10) AppleWebKit/605 Chrome/X — the mismatch is the tell. Real Chrome uses WebKit 537. AppleWebKit/605 belongs to Safari. A bot copy-pasted a UA string and got it wrong.

getwp/1.0 (WP tespit) — 2 hits
"Tespit" is Turkish for "detection." This is a WordPress fingerprinting tool — it's specifically designed to identify WordPress installations and their configurations. Two hits isn't a lot, but it indicates a reconnaissance scan.
HTTP libraries at scale: curl, wget, python-requests, Apache-HttpClient, Java — multiple hits
Automated tooling probing the site. Some may be benign (uptime monitors, oEmbed fetchers), but in this context they're contributing to aggregate resource consumption.

The math: what this costs in performance
Let's be concrete about the server impact.
Assume a modest managed WordPress hosting plan with 4 concurrent PHP workers — typical for entry-level plans on WP Engine, Kinsta, Cloudways, or similar platforms.
During a scraping burst from Go-http-client at scale:

  • The scraper sends requests faster than WordPress can respond
  • Each request holds a PHP worker for the duration of the page render
  • With 4 workers occupied by bot traffic, legitimate visitor requests queue or time out
  • The user sees a slow site or a 503 error
  • Google Analytics records nothing — the real visitor may have bounced before the JS loaded
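
To put rough numbers on it (assumptions for illustration, not measurements from this site): if an uncached page render takes about 500 ms, 4 workers can serve at most roughly 8 requests per second. A scraper firing requests at that rate occupies the entire pool by itself, and short bursts of that, repeated for months, produce exactly the "sometimes it's fine, sometimes it crawls" pattern the client described.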

The scraper never shows up in your analytics. The slowness shows up as "intermittent performance issues" with no obvious cause. The client reports it, you investigate, find nothing in the standard tools, and the cycle repeats.

Why Google Analytics can't see this
This is the core issue, and it's architectural.
Google Analytics (and most analytics platforms) work by injecting a JavaScript snippet into your pages. That snippet runs in the visitor's browser, collects data, and sends it to GA's servers. This means:

  • Bots with no JavaScript engine are completely invisible
  • Requests that never render a full page (hitting REST API endpoints, XML-RPC, login pages directly) produce no GA event
  • Requests blocked before the page loads generate no data at all
  • Crawlers, scrapers, HTTP libraries — none of them run JavaScript

In this case, the Go scraper was making 67,000 requests and every single one was invisible to GA. The client's analytics showed normal, healthy traffic. The server told a completely different story.

What you should look for on your own sites
If your clients are experiencing intermittent slowness with no obvious cause, check for these patterns at the server level:
High-volume single user agents in raw logs
Any single non-browser UA with thousands of hits is a red flag. Legitimate crawlers (Googlebot, Bingbot) are well-behaved and won't hit a small site thousands of times in a short window.
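You don't need a dedicated tool for a first pass at this. The sketch below counts requests per user agent from a combined-format access log; the log path is an assumption and will differ on your host:

```php
<?php
// Rough sketch: count requests per user agent in a combined-format access log.
// The log path is an assumption -- point it at your server's actual log.
$counts = [];
foreach (file('/var/log/nginx/access.log', FILE_IGNORE_NEW_LINES) as $line) {
    // In the combined log format the user agent is the last quoted field.
    if (preg_match('/"([^"]*)"\s*$/', $line, $m)) {
        $counts[$m[1]] = ($counts[$m[1]] ?? 0) + 1;
    }
}
arsort($counts);
foreach (array_slice($counts, 0, 20, true) as $ua => $hits) {
    printf("%8d  %s\n", $hits, $ua);
}
```

A single non-browser UA dominating the top of that output is the Go-http-client pattern from the case above.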
Requests to WordPress-specific endpoints

  • /wp-json/wp/v2/users — user enumeration
  • /wp-login.php — brute force
  • /.env, /wp-config.php.bak, /.git/config — credential harvesting
  • xmlrpc.php — DDoS amplification vector
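
A quick way to see who is probing those endpoints is to pull the requesting IPs out of the same access log. A minimal sketch, under the same assumptions about log path and format as above:

```php
<?php
// Rough sketch: list the IPs hitting sensitive WordPress paths.
// Log path and combined log format are assumptions.
$probePaths = ['/wp-json/wp/v2/users', '/wp-login.php', '/.env', '/.git/config', '/xmlrpc.php'];
$hitsByIp = [];
foreach (file('/var/log/nginx/access.log', FILE_IGNORE_NEW_LINES) as $line) {
    foreach ($probePaths as $path) {
        if (strpos($line, $path) !== false) {
            $ip = strtok($line, ' '); // the client IP is the first field
            $hitsByIp[$ip] = ($hitsByIp[$ip] ?? 0) + 1;
            break;
        }
    }
}
arsort($hitsByIp);
print_r(array_slice($hitsByIp, 0, 10, true));
```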

User agent mismatches
WebKit version numbers don't match the stated browser. Platform identifiers missing or inconsistent. Truncated UA strings.
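Those mismatches are mechanical enough to check for in code. The sketch below covers only the two patterns called out earlier (a Chrome token paired with Safari's WebKit 605 build, and a Mozilla/5.0 string with no platform block); it's an illustration, not a complete classifier:

```php
<?php
// Rough heuristic: flag UA strings whose tokens contradict each other.
// Covers only the two mismatches discussed here, not a complete classifier.
function ua_looks_spoofed(string $ua): bool
{
    // Claims to be Chrome but reports Safari's WebKit build (605.x);
    // real Chrome freezes its token at AppleWebKit/537.36.
    if (stripos($ua, 'Chrome/') !== false && stripos($ua, 'AppleWebKit/605') !== false) {
        return true;
    }
    // "Mozilla/5.0" with no parenthesized platform block is malformed.
    if (stripos($ua, 'Mozilla/5.0') === 0 && strpos($ua, '(') === false) {
        return true;
    }
    return false;
}

var_dump(ua_looks_spoofed('Mozilla/5.0 (Windows NT 10) AppleWebKit/605 Chrome/120 Safari/605')); // bool(true)
var_dump(ua_looks_spoofed('Mozilla/5.0 AppleWebKit/537.36'));                                    // bool(true)
```
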
HTTP library user agents at volume
curl, wget, python-requests, axios, Go-http-client, Apache-HttpClient, java/ — these are not browsers. A handful of hits may be legitimate. Hundreds or thousands point to automated activity.

Requests from known scanner tools
Any UA containing "scan", "test", "probe", "check", "spider", "bot" that isn't a verified crawler should be investigated.

The fix
Once the traffic pattern was identified, the remediation was straightforward (a minimal sketch of the first two steps follows the list):

  1. Block Go-http-client at the firewall level — this single rule eliminated 99% of the problematic traffic
  2. Block /wp-json/wp/v2/users at the application level to stop user enumeration
  3. Rate-limit unknown user agents — requests from unrecognized UAs get a strict per-IP rate limit
  4. Add the offending IPs to the block list — and share them with the collective intelligence network so other sites benefit automatically
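
None of this requires a particular product. As one illustration of steps 1 and 2, here is a minimal mu-plugin sketch; the plugin name and the user agent list are assumptions, not the rules used on this site, and a block at the firewall or web server is still preferable because it rejects the request before a PHP worker is ever occupied.

```php
<?php
/**
 * Plugin Name: Block scraper UAs and user enumeration (sketch)
 * Drop into wp-content/mu-plugins/. Illustrative only; blocking at the
 * web server or firewall is better because the request never reaches PHP.
 */

// Step 1 (application-level fallback): reject known scraper user agents.
add_action('init', function () {
    $ua = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $blocked = ['Go-http-client', 'python-requests', 'axios/']; // assumed list
    foreach ($blocked as $needle) {
        if (stripos($ua, $needle) !== false) {
            status_header(403);
            exit('Forbidden');
        }
    }
}, 0);

// Step 2: remove the REST API routes that expose usernames.
add_filter('rest_endpoints', function (array $endpoints) {
    unset($endpoints['/wp/v2/users']);
    unset($endpoints['/wp/v2/users/(?P<id>[\d]+)']);
    return $endpoints;
});
```

Rate limiting unknown user agents (step 3) is better handled at the web server, CDN, or host firewall, where requests can be dropped before they ever occupy a PHP worker.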

After implementing these changes, the intermittent slowness the client had reported for months disappeared.

The broader lesson
Bot traffic doesn't have to actively attack your site to damage it. Passive scraping at scale consumes the same server resources as real visitors. The damage is indirect — slower response times, occasional 503s, inflated hosting costs — and because standard analytics tools are completely blind to it, it can persist undetected for months or years.
Every WordPress site has some level of non-human traffic. The question is whether you're measuring it.

How SysWP Radar and Shield address this

SysWP Radar performs server-side traffic analysis that captures every request regardless of whether it comes from a browser. It classifies all traffic into nine categories — human visitors, verified bots (Googlebot, Bingbot), AI crawlers, SEO crawlers, RSS readers, WordPress internal requests, health checks, attackers, and unknown — giving you visibility into the full traffic picture, not just the browser-based slice.
In this case, Radar identified the Go-http-client scraper, the REST API enumeration attempts, and the spoofed UA patterns within minutes of installation.

SysWP Shield uses that intelligence to block. When Radar identifies a threat pattern, Shield can block it with one click. More importantly, Shield's collective intelligence network means that when one site identifies a malicious IP or behavior pattern, that block propagates automatically to every other site in the network — so your clients benefit from the combined threat intelligence of the entire user base.
If you're managing WordPress sites professionally and your clients are reporting intermittent performance issues, this is worth investigating. The traffic your analytics isn't showing you may be exactly the problem.

from: https://syswp.pro/why-your-wordpress-site-is-slow-and-ga-has-no-idea-but-radar-sees-it/
