TL;DR: Most people imagine pen testing as a montage of terminals, complex exploits, and hours of deep technical work. The reality is that the first 10 minutes are almost always the most revealing. I run the same opening checklist on every web application I assess — and in those 10 minutes, I almost always find 2 or 3 things that a real attacker would exploit before they even get to the sophisticated stuff. Here's exactly what that checklist looks like, and how you can run it on your own application today.
Why the First 10 Minutes Tell You So Much
There's a principle in security that's uncomfortable but consistently true: the most dangerous vulnerabilities in your application are usually the obvious ones. Not because your team is careless — but because obvious things are easy to miss when you're deep in feature development, operating under deadline pressure, and focused on what your application does rather than what it shouldn't allow.
An attacker approaching your application cold has no context, no assumptions, and no attachment. They look at the surface before they try to break through it. They check what you've accidentally left visible before they try to find what's deliberately hidden.
That's exactly how I start every assessment. No tools running yet. No automated scans. Just a browser and a clear mental checklist.
Here's what's on it.
The 9-Point Opening Checklist
1. HTTP to HTTPS Enforcement and Cookie Security
First thing I do: type http:// (not https) in front of the domain. Does the application redirect? Does it redirect with a 301 (permanent) or a 302 (temporary)? Is HTTP Strict Transport Security (HSTS) set in the response headers?
Then I log in and open DevTools. I look at every cookie the application sets. Three questions: Is the Secure flag set (cookie only transmitted over HTTPS)? Is HttpOnly set (JavaScript can't read it)? Is SameSite configured to Strict or Lax?
A session cookie missing any of these flags is a vulnerability I'll document in every report. It's also one of the simplest fixes in existence.
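The flag check is easy to script. Here's a minimal Python sketch that parses a raw `Set-Cookie` header and reports which of the three attributes are missing — the header value in the example is illustrative, not taken from any real application:

```python
# Sketch: report which security attributes a Set-Cookie header is missing.

def audit_set_cookie(header: str) -> list[str]:
    """Return the security attributes absent from one Set-Cookie header."""
    # Everything after the first ';' is an attribute; names are case-insensitive.
    attrs = {part.strip().split("=")[0].lower() for part in header.split(";")[1:]}
    missing = []
    if "secure" not in attrs:
        missing.append("Secure")
    if "httponly" not in attrs:
        missing.append("HttpOnly")
    if "samesite" not in attrs:
        missing.append("SameSite")
    return missing

print(audit_set_cookie("session=abc123; Path=/; HttpOnly"))
# -> ['Secure', 'SameSite']
```

You can paste any `Set-Cookie` value straight out of DevTools into this function; an empty list means all three flags are present.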
2. Security Response Headers
Ctrl+Shift+I. Network tab. I load the application and look at the response headers on the main document. Six headers tell me an enormous amount in under 60 seconds:
- `Content-Security-Policy` — absent or set to a wildcard policy means XSS mitigations are wide open
- `X-Frame-Options` or a `frame-ancestors` CSP directive — absent means clickjacking is possible
- `X-Content-Type-Options: nosniff` — absent means MIME-type sniffing attacks are viable
- `Referrer-Policy` — absent means sensitive URLs in the Referer header leak to third parties
- `Permissions-Policy` — reveals which browser APIs the application uses
- `Server` and `X-Powered-By` — if these are present, they're telling me your web server version and framework. That's free reconnaissance I didn't have to work for.
Missing security headers are a quick win for attackers and a quick fix for developers. They're also almost always present in the findings of every assessment I run.
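If you'd rather not eyeball the Network tab, the same six-header review can be run over any response's headers programmatically. A minimal sketch — the header values passed in the example are hypothetical:

```python
# Headers that should be present, and headers that leak server details.
EXPECTED = [
    "content-security-policy",
    "x-frame-options",
    "x-content-type-options",
    "referrer-policy",
    "permissions-policy",
]
LEAKY = ["server", "x-powered-by"]

def header_report(headers: dict[str, str]) -> dict[str, list[str]]:
    """Summarise missing security headers and present fingerprint headers."""
    present = {k.lower() for k in headers}
    return {
        "missing": [h for h in EXPECTED if h not in present],
        "leaking": [h for h in LEAKY if h in present],
    }

report = header_report({
    "Content-Security-Policy": "default-src 'self'",  # hypothetical response
    "Server": "nginx/1.18.0",
})
print(report)
```

Feed it the headers dict from whatever HTTP client you prefer; anything in `missing` or `leaking` is a finding worth a ticket.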
3. robots.txt and sitemap.xml
Every pen tester checks these. Attackers do too — it takes three seconds.
/robots.txt was designed to tell search engines which paths not to index. It's essentially a publicly available map of paths you consider sensitive. I've found admin panels, internal API endpoints, staging directories, and backup locations all listed in robots.txt files on production applications.
/sitemap.xml gives me a complete list of every URL the application wants indexed. It tells me the full scope of the application before I've done any discovery work myself.
Neither of these is a vulnerability by itself. But both reliably point me toward the most interesting parts of the application within the first two minutes.
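Pulling the interesting paths out of a robots.txt file is a one-liner's worth of work. A minimal sketch that extracts every non-empty `Disallow` path from raw robots.txt text (the sample content is invented):

```python
def disallowed_paths(robots_txt: str) -> list[str]:
    """Extract non-empty Disallow paths from robots.txt text."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#")[0].strip()  # drop comments
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:
                paths.append(path)
    return paths

sample = """User-agent: *
Disallow: /admin/
Disallow: /internal-api/  # legacy
Allow: /
"""
print(disallowed_paths(sample))
# -> ['/admin/', '/internal-api/']
```

Every path in that list is somewhere the site owner explicitly didn't want indexed — which is exactly why it's worth visiting.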
4. IDOR Check on Every Visible ID Parameter
The moment I see a URL like /account/profile?id=1042 or /invoice/download?ref=8834, I open a second browser, log in as a different user, and try to access those same URLs.
If I get the first user's data in the second user's session — that's an IDOR. Full stop. This is Broken Access Control, ranked #1 in both the 2021 and 2025 editions of the OWASP Top 10, and it's how roughly 10 million Optus customer records were stolen by incrementing a single integer.
I also check whether IDs are sequential integers. If they are, even a fully authenticated endpoint is at higher risk — because enumeration doesn't require guessing. We covered this pattern in depth in this post on IDOR vulnerabilities.
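A quick heuristic on a handful of observed IDs tells you whether enumeration is on the table before you've sent a single request. This sketch flags ID sets that look sequentially allocated — the density threshold is an arbitrary assumption for illustration, not a standard:

```python
def looks_enumerable(ids: list[str]) -> bool:
    """Heuristic: all-numeric IDs clustered in a narrow range suggest
    sequential allocation, which makes enumeration trivial."""
    if not ids or not all(i.isdigit() for i in ids):
        return False  # non-numeric IDs (UUIDs etc.) aren't trivially countable
    nums = sorted(int(i) for i in ids)
    span = nums[-1] - nums[0]
    # Arbitrary assumption: IDs packed densely relative to the sample size
    # are treated as sequential.
    return span < len(nums) * 1000

print(looks_enumerable(["1042", "1043", "1051"]))   # sequential-looking -> True
print(looks_enumerable(["8f3a9c1e", "b2d47f"]))     # opaque IDs -> False
```

A `True` here doesn't prove an IDOR exists — it just tells you the cross-account test in the paragraph above is worth running on every endpoint that takes that ID.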
5. JavaScript File Review
Modern web applications ship enormous JavaScript bundles to the browser. The developer's intent is to send the frontend code. What often comes along for the ride: internal API endpoint paths, environment variable names, hardcoded API keys, commented-out debug code, and internal service URLs that were never meant to be public.
I open the browser's Sources tab, look through the loaded JS files, and run a quick search for strings like api_key, secret, token, internal, admin, and TODO. You would be surprised how often this surfaces something useful in under five minutes. We've written about what happens when secrets make it into code — the same pattern applies to client-side JavaScript.
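That manual search is trivially automatable over a saved bundle. A minimal sketch that greps JavaScript text for the suspect strings above and reports line numbers (the bundle content in the example is made up):

```python
import re

# The same strings searched by hand in the Sources tab.
SUSPECT = re.compile(r"(api[_-]?key|secret|token|internal|admin|TODO)", re.IGNORECASE)

def grep_bundle(js: str) -> list[tuple[int, str]]:
    """Return (line number, line) for every line containing a suspect string."""
    return [
        (n, line.strip())
        for n, line in enumerate(js.splitlines(), 1)
        if SUSPECT.search(line)
    ]

bundle = "const apiKey = 'abc';\nconsole.log('hi');\n// TODO remove debug route\n"
print(grep_bundle(bundle))
```

Expect plenty of false positives — `token` in particular appears in harmless contexts — but five minutes of triage on the matches is usually time well spent.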
6. Password Reset Flow
Authentication flows are where I spend serious time later in an assessment, but the password reset flow gets a quick check early because it fails so consistently. I specifically look for: Does the reset token expire? Can I reuse a token after it's been used once? Is there a rate limit on reset requests, or can I flood the endpoint? Is the token short enough to brute-force?
The weakest reset flows I've seen use six-digit numeric tokens with no expiry and no rate limiting. That's 1,000,000 possible combinations and unlimited attempts — a brute-force that takes minutes. We covered why broken authentication shows up this consistently and what a secure implementation looks like.
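The arithmetic behind that "minutes" claim is worth making concrete. A quick sketch — the 500 requests/second figure is an illustrative assumption for an unthrottled endpoint, not a measured value:

```python
def brute_force_minutes(token_space: int, requests_per_second: float) -> float:
    """Average time to guess one valid token with no rate limiting.
    On average an attacker hits a valid token after trying half the space."""
    return (token_space / 2) / requests_per_second / 60

# Six-digit numeric token (10^6 possibilities), assumed 500 req/s, no limit:
print(round(brute_force_minutes(10**6, 500), 1))
# -> 16.7 minutes on average
```

Lengthening the token changes the picture entirely: a 128-bit random token at the same request rate takes longer than the age of the universe. Rate limiting and expiry close the gap from the other direction.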
7. Error Messages and Information Disclosure
I start poking at inputs with values they weren't designed to handle. A single quote in a search field. A letter in a numeric ID field. A negative number in a quantity field. An oversized string in a text input.
What I'm looking for is what the application says when it breaks. Does it return a generic "something went wrong" message, or does it return a stack trace showing me your framework version, your file paths, your database schema, and your internal IP addresses?
Verbose error messages are free reconnaissance for an attacker. They're also a misconfiguration finding that belongs in every report because it directly accelerates every other attack.
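Spotting verbose errors at scale is mostly string matching. This sketch scans a response body for common stack-trace and path-disclosure markers — the marker list is a small, non-exhaustive sample you'd extend for your own stack:

```python
# Non-exhaustive sample of strings that indicate internals are leaking.
LEAK_MARKERS = [
    "Traceback (most recent call last)",  # Python stack trace
    "at java.",                           # Java stack frames
    "ORA-",                               # Oracle error codes
    "SQLSTATE",                           # generic SQL errors
    "/var/www/",                          # filesystem paths
]

def looks_verbose(body: str) -> list[str]:
    """Return the leak markers found in a response body."""
    return [m for m in LEAK_MARKERS if m in body]

body = '500\nTraceback (most recent call last):\n  File "/var/www/app/views.py"'
print(looks_verbose(body))
```

Run it over the responses to your malformed-input probes; any hit means the application is narrating its own internals to an attacker.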
8. Subdomain Enumeration
Still in that first 10 minutes, I'll run a quick passive subdomain check. Tools like SecurityTrails, crt.sh (certificate transparency logs), and DNSdumpster surface subdomains without sending a single packet to the target. I'm looking for: staging environments, old API versions, admin panels, internal tools, and forgotten development servers.
The Internet Archive breach we covered here started with a forgotten development subdomain. This is not an edge case — it's one of the most reliable findings on every assessment I run.
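crt.sh exposes its certificate-transparency results as JSON, and the interesting part is the `name_value` field, which can hold several newline-separated host names per entry (format as observed at the time of writing). A minimal sketch that deduplicates those into a subdomain list — the sample JSON is invented:

```python
import json

def subdomains_from_crtsh(json_text: str) -> set[str]:
    """Collect unique host names from a crt.sh JSON response."""
    names = set()
    for entry in json.loads(json_text):
        # One certificate entry can list several names, newline-separated.
        for name in entry["name_value"].splitlines():
            names.add(name.strip().lstrip("*."))  # drop wildcard prefixes
    return names

sample = '[{"name_value": "staging.example.com\\ndev.example.com"}]'
print(sorted(subdomains_from_crtsh(sample)))
# -> ['dev.example.com', 'staging.example.com']
```

Point your HTTP client at `https://crt.sh/?q=%.yourdomain.com&output=json`, feed the body into this function, and compare the result against the subdomains you actually knew you had.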
9. Unauthenticated LLM and AI Endpoints (The 2026 Addition)
This one didn't exist on my checklist two years ago. Now it's standard.
If I can tell from the application's functionality or JavaScript that it's using an LLM backend — a chat feature, an AI assistant, a document summarisation tool — I immediately look for the API endpoint that talks to it. I check whether it's authenticated. I check whether I can call it directly without a user session. I check whether it has rate limiting. I check whether it's proxied through the application's own backend or hitting OpenAI/Anthropic directly with a hardcoded key in the client-side JavaScript.
Unauthenticated LLM endpoints are how LLMjacking attacks happen. And the API security blind spots that affect standard endpoints are even more prevalent in AI feature implementations because they're often built quickly by teams without a security background.
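The hardcoded-key case is the easiest of these checks to automate, because the major vendors use recognisable key prefixes (`sk-` for OpenAI, `sk-ant-` for Anthropic, per their publicly documented formats). A sketch that scans bundle text for them — treat any match as a lead to verify, not proof:

```python
import re

# Publicly documented vendor key prefixes; length bound is an assumption.
KEY_PATTERNS = {
    "OpenAI": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "Anthropic": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
}

def find_llm_keys(js: str) -> list[str]:
    """Return vendors whose key pattern appears in the JavaScript text."""
    return [vendor for pat_name, pat in KEY_PATTERNS.items()
            if pat.search(js)
            for vendor in [pat_name]]

snippet = 'headers: { Authorization: "Bearer sk-' + "a" * 24 + '" }'  # fabricated key
print(find_llm_keys(snippet))
# -> ['OpenAI']
```

Note the generic `sk-` pattern also matches Anthropic keys, so an `sk-ant-` hit reports both vendors; either way, a match in client-side code means the key is already public.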
What These Findings Tell Me About the Rest of the Engagement
Here's the part that matters most for engineering leaders: when I find issues in these first 10 minutes, it's not because I've found the edge cases. It's because I've found the surface layer. These are the things a competent attacker finds in their first pass before they've even started trying.
If an application fails on three or four of these checks, it tells me the rest of the assessment is going to be thorough. It suggests that security wasn't a structured part of the build process — that it was assumed to be handled rather than explicitly designed in.
If an application passes most of these cleanly, I know I'm working with a team that has thought about security at the implementation level. The assessment gets more interesting from there — we start finding the architectural and logic issues that take more effort — but the low-hanging fruit is gone.
The full scope of what we test beyond this is covered in our complete web app pen test checklist. The first 10 minutes are just the opening conversation.
How to Run This on Your Own App Right Now
You don't need any specialist tools for most of this checklist. A browser with DevTools open, a second test account, and access to crt.sh and dnsdumpster.com are enough to cover roughly half of it in under 30 minutes.
Open your application. Work through each of the nine points above. Write down what you find. If anything flags — a missing security header, an IDOR that works across accounts, a subdomain you'd forgotten about — that's something worth addressing before you let a real attacker find it.
How Kuboid Secure Layer Can Help
The first 10 minutes are a free check you can run yourself. What comes after requires a structured methodology, adversarial thinking, and experience with what the findings in the surface layer usually lead to underneath.
Our web application penetration tests start with this checklist and go significantly further — covering authentication logic, business logic flaws, server-side vulnerabilities, and the full OWASP Top 10:2025 framework. If you'd like to see exactly what we look for across a full engagement, book a free consultation and we'll walk you through our process.
If you've ever run a quick security check on your own application — even informally — what did you find? I'm genuinely curious. Drop a comment below. The most common answer is "more than I expected."