Every developer has wondered at some point: what is this site built with? Maybe you’re doing competitive research, reverse-engineering a stack decision, or just curious how a particularly fast site achieves its performance.
Tools like Wappalyzer answer this question instantly. But how do they actually know? The answer is more interesting than you’d expect and involves four distinct layers of analysis working simultaneously.
Layer 1: HTML Source Analysis
The most obvious place to start is the raw HTML. Websites leave fingerprints everywhere in their markup, and most of them aren’t trying to hide.
Generator meta tags are the easiest signal. WordPress sites often include one that announces the CMS, and frequently its version, outright.
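A typical WordPress generator tag looks like this (the version number here is illustrative):

```html
<meta name="generator" content="WordPress 6.4.2">
```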
Webflow embeds its own data attributes throughout the DOM.
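For example, a Webflow export commonly carries `data-wf-*` attributes on the root element alongside `w-` prefixed classes (the attribute values below are placeholders):

```html
<html data-wf-page="PAGE_ID" data-wf-site="SITE_ID">
  <div class="w-container">...</div>
</html>
```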
CSS class patterns reveal frameworks. A site with hundreds of classes prefixed with wp- is almost certainly WordPress. Classes like w-nav, w-container, or wf- point to Webflow. Tailwind’s utility classes (flex, items-center, px-4) are identifiable by their sheer volume and naming convention.
Script src attributes are particularly revealing. A script loading from cdn.shopify.com tells you everything you need to know. Similarly, /wp-content/plugins/ in any script path confirms WordPress with near certainty.
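This kind of URL matching is easy to sketch. The signature table below is illustrative, not WebReveal's actual rule set:

```python
import re

# Illustrative signature table: URL substring -> technology
SCRIPT_SIGNATURES = {
    "cdn.shopify.com": "Shopify",
    "/wp-content/plugins/": "WordPress",
    "/wp-includes/js/": "WordPress",
}

def detect_from_scripts(html: str) -> set[str]:
    """Collect technologies whose signature appears in any <script src>."""
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.IGNORECASE)
    found = set()
    for src in srcs:
        for needle, tech in SCRIPT_SIGNATURES.items():
            if needle in src:
                found.add(tech)
    return found
```

Matching on full URL substrings rather than bare library names is what keeps a detector from firing on every site that merely mentions jQuery.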
The challenge with HTML analysis is noise. Modern sites load dozens of scripts, and many share superficially similar patterns. A naive detector that fires on any jQuery reference would produce false positives constantly; jQuery is everywhere.
Layer 2: HTTP Response Headers
While HTML lives in the body, the HTTP response headers arrive first and often contain valuable server-side signals.
X-Powered-By is the most direct. Servers and frameworks sometimes broadcast themselves:
X-Powered-By: PHP/8.1.0
X-Powered-By: Express
X-Powered-By: Next.js
Session cookies reveal backend frameworks with surprising accuracy. PHPSESSID means PHP. JSESSIONID means Java. laravel_session means Laravel. _session_id with a specific format can indicate Ruby on Rails.
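A minimal sketch of cookie-based hinting, using an illustrative mapping from cookie names to backends:

```python
# Illustrative mapping of session cookie names to backend hints
COOKIE_HINTS = {
    "PHPSESSID": "PHP",
    "JSESSIONID": "Java",
    "laravel_session": "Laravel",
    "_session_id": "Ruby on Rails (weak signal)",
}

def backend_hint_from_cookies(set_cookie_headers: list[str]) -> list[str]:
    """Return backend hints for any recognised session cookie names."""
    hints = []
    for header in set_cookie_headers:
        # A Set-Cookie header starts with "name=value; attributes..."
        name = header.split("=", 1)[0].strip()
        if name in COOKIE_HINTS:
            hints.append(COOKIE_HINTS[name])
    return hints
```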
These aren’t definitive alone, but combined with other signals they’re strong evidence.
Server headers tell you about the web server layer:
Server: nginx/1.24.0
Server: Apache/2.4.57
Server: cloudflare
Cloudflare’s presence in the Server header combined with a CF-Ray header is a reliable CDN detection signal. The absence of a meaningful Server header often indicates that a CDN or proxy is stripping it, which is itself a signal worth noting.
X-Robots-Tag and cache headers can also hint at infrastructure. Vercel deployments often include x-vercel-cache. Netlify adds x-nf-request-id. These deployment-specific headers are some of the most reliable signals in the entire detection process.
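Header-based checks reduce to a lookup table. This sketch uses a few of the signals mentioned above; the table and weights are illustrative:

```python
# Illustrative header signatures: (header name, value substring, technology).
# An empty substring means "presence of the header is enough".
HEADER_SIGNATURES = [
    ("x-powered-by", "express", "Express"),
    ("x-powered-by", "php", "PHP"),
    ("cf-ray", "", "Cloudflare"),
    ("x-vercel-cache", "", "Vercel"),
    ("x-nf-request-id", "", "Netlify"),
]

def detect_from_headers(headers: dict[str, str]) -> set[str]:
    """Match response headers (case-insensitively) against the table."""
    lowered = {k.lower(): v.lower() for k, v in headers.items()}
    found = set()
    for name, needle, tech in HEADER_SIGNATURES:
        value = lowered.get(name)
        if value is not None and needle in value:
            found.add(tech)
    return found
```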
Layer 3: JavaScript Assets and Globals
JavaScript is where modern applications reveal the most about themselves, but also where detection gets genuinely hard.
Asset URL patterns are a primary signal. A site loading scripts from cdn.jsdelivr.net/npm/react@18/ is obviously running React. URLs containing /wp-includes/js/ confirm WordPress. Shopify’s CDN (cdn.shopify.com) is unmistakable.
Window globals are often more reliable than asset URLs. JavaScript frameworks and libraries typically attach themselves to the global window object:
• window.React or window.__REACT_DEVTOOLS_GLOBAL_HOOK__ — React
• window.Webflow — Webflow
• window.Shopify — Shopify
• window.__NEXT_DATA__ — Next.js (the same data also appears as an inline script tag in the HTML)
• window.gtag — Google Analytics 4
The __NEXT_DATA__ signal deserves special mention. Next.js embeds its routing and props data in a script tag with id="__NEXT_DATA__" on every server-rendered page. It’s one of the most reliable framework detection signals that exists.
Inline script content is worth scanning too, though selectively. Scanning the first 500 characters of each inline script (rather than the entire content) gives you enough signal to catch initialisation patterns without the performance cost of processing megabytes of minified code.
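A sketch of both ideas, prefix-limited inline script scanning and the __NEXT_DATA__ check (the 500-character limit follows the heuristic above):

```python
import re

def inline_script_snippets(html: str, limit: int = 500) -> list[str]:
    """Return the first `limit` characters of each non-empty inline <script> body."""
    bodies = re.findall(r"<script[^>]*>(.*?)</script>", html, re.DOTALL | re.IGNORECASE)
    return [body[:limit] for body in bodies if body.strip()]

def looks_like_nextjs(html: str) -> bool:
    """Next.js server-rendered pages embed a JSON payload in id="__NEXT_DATA__"."""
    return bool(re.search(r'<script[^>]+id=["\']__NEXT_DATA__["\']', html, re.IGNORECASE))
```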
Layer 4: DNS Lookups
HTTP analysis tells you what the site presents to the browser. DNS tells you about the infrastructure underneath.
CNAME records are the most useful. A CNAME pointing to *.vercel.app confirms Vercel hosting, even if the site uses a custom domain with no Vercel branding anywhere in the HTML or headers.
Similarly:
• *.netlify.app → Netlify
• *.render.com → Render
• *.github.io → GitHub Pages
• *.elasticbeanstalk.com → AWS Elastic Beanstalk
• *.pages.dev → Cloudflare Pages
NS records reveal DNS providers. A site whose nameservers end in .ns.cloudflare.com is on Cloudflare’s DNS, though this alone doesn’t confirm they’re using Cloudflare as a CDN/proxy; it may be DNS management only.
DNS lookups add latency to detection, but they surface hosting information that’s invisible to pure HTTP analysis. A site can perfectly mask its hosting in every header and HTML attribute, but it can’t easily hide its CNAME records.
The Confidence Problem
Here’s where detection gets genuinely difficult: any individual signal can be wrong.
A site might load a WordPress plugin via CDN without running WordPress itself. A developer might have left a PHPSESSID cookie from a legacy system on a site that now runs entirely on Node. Cloudflare’s presence in DNS doesn’t mean Cloudflare is proxying traffic.
Good detection systems use multi-signal confidence scoring. A single weak signal (one asset URL) gets low confidence.
Multiple corroborating signals from different layers get high confidence. The system only reports a technology when it’s accumulated enough evidence to be reasonably certain.
Some categories require stricter evidence than others. CMS detection is particularly noisy; requiring two distinct pieces of evidence before reporting a result prevents the most common false positives.
Marketing tools and analytics are even noisier, because sites often load tracking tags through tag managers, creating indirect evidence chains.
Anti-false-positive rules are as important as detection signatures. Knowing that “this signal alone is not enough” prevents the kind of embarrassing errors that erode trust in any detection tool.
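The scoring idea can be sketched as weighted accumulation with per-category thresholds. The weights and thresholds below are made up for illustration, not WebReveal's actual tuning:

```python
from collections import defaultdict

# Illustrative weights per signal source and per-category reporting thresholds.
WEIGHTS = {"asset_url": 1, "cookie": 1, "header": 2, "dns": 3, "meta_tag": 3}
THRESHOLDS = defaultdict(lambda: 2, {"CMS": 4})  # noisy categories need more evidence

def score(signals: list[tuple[str, str, str]]) -> dict[str, int]:
    """signals: (technology, category, source) triples -> technologies that clear
    their category's threshold, with accumulated scores."""
    totals: dict[str, int] = defaultdict(int)
    categories: dict[str, str] = {}
    for tech, category, source in signals:
        totals[tech] += WEIGHTS.get(source, 1)
        categories[tech] = category
    return {t: s for t, s in totals.items() if s >= THRESHOLDS[categories[t]]}
```

A lone asset URL never clears the CMS threshold, but corroboration from a second layer (say, a generator meta tag) does, which is exactly the anti-false-positive behaviour described above.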
Version Extraction
Detection doesn’t stop at identifying a technology; version numbers add significant value.
WordPress embeds its version in the generator meta tag and in URL query params (?ver=6.4.2 on assets).
React’s version appears in the bundle itself. jQuery’s version is typically in its filename (jquery-3.7.1.min.js). PHP version comes from the X-Powered-By header when PHP hasn’t been configured to hide it.
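Two of these extractions reduce to regexes. These patterns are a sketch covering only the jQuery filename and WordPress generator-tag cases mentioned above:

```python
import re

def extract_versions(html: str) -> dict[str, str]:
    """Pull version hints from asset URLs and meta tags; patterns are illustrative."""
    versions = {}
    # jQuery typically carries its version in the filename, e.g. jquery-3.7.1.min.js
    m = re.search(r"jquery-(\d+\.\d+(?:\.\d+)?)(?:\.min)?\.js", html)
    if m:
        versions["jQuery"] = m.group(1)
    # WordPress announces its version in the generator meta tag
    m = re.search(r'<meta name="generator" content="WordPress ([\d.]+)"', html)
    if m:
        versions["WordPress"] = m.group(1)
    return versions
```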
Version detection matters because it enables vulnerability research, dependency auditing, and migration planning: use cases that go well beyond simple curiosity.
Putting It Together
The full detection pipeline for a single site looks something like this:
1. Fetch the page, mimicking a real browser (User-Agent, Sec-Fetch headers) with redirect following and a body size cap
2. Parse HTML with a proper parser, falling back to regex if parsing fails
3. Build an asset list: all script srcs, link hrefs, inline script content snippets, meta tags, data attributes
4. Run all signatures against the HTML, asset list, and response headers simultaneously
5. Perform DNS CNAME/NS lookups for hosting detection
6. Score each detected technology by signal strength and source
7. Apply consolidation rules to remove duplicates and suppress low-confidence results in noisy categories
8. Extract version numbers from whatever signals contain them
The result is a structured report that tells you not just what a site uses, but how confident the detector is, and what evidence it found.
I built WebReveal to automate this entire process. It’s free, scans in real time, and covers 50+ technology categories across 1,000+ signatures. If you’re curious how a specific site is built, it takes about three seconds to find out.