Let me be honest with you: I shipped security vulnerabilities to production and didn't know about them for months.
I've been building an AI-powered platform mostly solo for a while now. What started as a side project has grown into a legitimate SaaS product with paying users, a tiered subscription system, and a codebase that I sometimes have to squint at to remember what a file does.
Here's what the stack looks like today:
- Frontend: HTML, CSS, JavaScript with Vite for bundling
- Serverless backend: 30+ Edge Functions and serverless functions
- Database & auth: Firebase (Firestore + Firebase Auth)
- Compute: Separate cloud servers for heavier processing tasks
- Integrations: OAuth flows, payment webhooks, external AI model APIs, PDF generation, code execution sandboxing
It's a lot. And like most solo developers who are simultaneously the product manager, the frontend dev, the backend dev, the designer, the support person, and the marketer... security reviews kept falling off my to-do list.
"I'll do a proper audit next sprint."
"Let me just ship this feature first."
"It's fine, the endpoints check the origin header."
Spoiler: it was not fine.
The Problem: You Can't Secure What You Can't See
Here's the thing about serverless architectures that nobody warns you about when you're starting out: every internal function call is an HTTP request.
In a traditional backend, your payment handler calls your user service, which calls your database layer - all in-process, all behind one API gateway. In my setup, I have edge functions calling serverless functions calling other functions, all communicating over HTTP through publicly accessible URLs. Each of those internal calls is technically a public endpoint.
When you're building fast and shipping features, you don't think about that. You think: "This function is only called by my other function, so it's fine." But "only called by" is a design intention, not a security control. Anyone on the internet can call any of those endpoints if they know the URL pattern.
I knew this in theory. I even had some origin-checking logic in place. But I'd never done a systematic audit to verify that every sensitive endpoint was actually locked down. With 30+ serverless functions and dozens of internal calls between them, manually tracing every path felt overwhelming.
That's where Amazon Quick came in.
What Is Amazon Quick?
Before I get into the audit, let me explain the tool for those who haven't used it.
Amazon Quick is a desktop AI companion from AWS. It sits on your machine, runs locally, and connects to your actual work environment - your files, your messaging, your email, your calendar. It's not a web app you paste code into. It's a desktop application with direct access to your filesystem.
What makes it useful for something like a security review is that it's not working from snippets you copy-paste into a chat window. It has the full picture:
- It can read your entire codebase because you grant it folder access on your machine
- It can search inside files using content search and semantic indexing
- It can write and edit files directly - no copy-pasting fixes back and forth
- It can run code in a sandboxed Python environment for analysis and batch operations
- It learns your project context through custom instructions you provide about your stack, conventions, and guardrails
It's the difference between showing someone a screenshot of your code and sitting them down at your desk with the full repo open.
Step 1: Setting Up the Project Context
Amazon Quick lets you set custom instructions that shape how it behaves for your specific project. Think of it like onboarding a new team member - you tell them how your project is structured, what your conventions are, and what they should and shouldn't touch.
This is the part most people rush through. They jump straight into "review my code" without any context. I've learned (the hard way, on other tasks) that spending 20 minutes on good instructions saves you hours of mediocre back-and-forth.
Here's an abbreviated version of what I configured:
```markdown
# Project Agent Instructions

## Core Principles
- Be concise, practical, and action-oriented
- Ask 1-3 clarifying questions only when needed
- Prefer safe, incremental changes over large rewrites
- Respect existing patterns, file structure, and naming
- Never invent features; confirm requirements when unsure

## Project Context
- Frontend: HTML, CSS, JavaScript, Vite-based assets
- Serverless: Edge Functions and serverless functions
- Integrations: Firebase (auth, Firestore), external APIs
- Multiple static HTML pages and feature pages

## Security Checklist
- Do not log secrets or PII
- Validate and sanitize all inputs (especially serverless endpoints)
- Enforce auth and authorization on protected actions
- Apply rate limiting where abuse is possible
- Use environment variables for secrets
- Flag risky patterns (eval, direct HTML injection, unrestricted CORS)

## Do Not Touch (Without Explicit Permission)
- Billing, pricing, or payment logic
- Authentication and authorization flows
- Database schema, security rules, deployment config
- API keys, secrets, or key-management code

## Output Format
For security reviews: findings list → severity → recommended fix
Always include: what you changed, where, why, and any risks
```
A few things I want to highlight about this approach:
The "Do Not Touch" section is critical. Quick has read and write access to my project folder. It can edit files directly. I need to be explicit about what it can analyze but not modify without asking first. This isn't about preventing mistakes - it's about building a workflow where I stay in control of the high-stakes decisions.
The conventions section saves time. When Quick finds an issue and writes a fix, it needs to know where the fix goes. Telling it your project layout means it won't create random files in the wrong place.
The output format sets expectations. "Findings list → severity → recommended fix" means I get structured output I can prioritize, not a rambling essay about security best practices.
After writing the instructions, I granted Quick access to my project folder. It could now read every JavaScript file, every HTML page, every config file - the full codebase sitting on my machine.
Step 2: The Audit - Watching Quick Think Through My Code
I told Quick: "Run a thorough security review of the project."
What happened next wasn't a single pass. Quick broke the work into distinct phases, and watching it work was honestly fascinating. It approached the problem the way a senior security engineer would.
Phase 1: Reconnaissance
First, it read the top-level project structure and key config files. It was building a mental model of the architecture: what technologies are in play, how things are deployed, where the trust boundaries are.
This matters because a security review without understanding the architecture is just pattern-matching. Quick needed to understand that my edge functions run on one runtime with one environment variable API, my serverless functions run on another, and my external compute server is a separate trust boundary entirely.
Phase 2: Critical File Deep Dive
Next, it identified and read about 20 files it considered security-critical:
- Authentication logic - how users log in, how sessions work
- Database rules - what's allowed and denied at the data layer
- Security configuration - any existing security utilities
- Serverless functions handling sensitive operations: key management, tier validation, webhook processing, URL fetching, code execution
- Client-side configuration - what's exposed to the browser
It wasn't reading every file in the project. It was making smart choices about what's likely to have security implications. A marketing landing page? Probably fine. The function that handles payment webhooks? Read that very carefully.
Phase 3: Pattern Scanning
Then it searched the entire codebase for known anti-patterns:
- Hardcoded secrets or API keys in source files
- Missing input validation on serverless endpoints
- Dangerous function usage (eval, direct HTML injection)
- Unsafe data storage patterns
- CORS misconfigurations
- Missing authentication checks
This is the "grep for bad things" phase, but smarter. It understood context. Not every innerHTML usage is dangerous. Not every string that looks like a key is actually a key. Quick filtered the noise and focused on actual issues.
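To give you a feel for what the naive version of this phase looks like, here's a minimal Node sketch of a pattern scan - the patterns are illustrative and this is my simplification, not what Quick actually runs. The value Quick adds is the context filtering layered on top of something like this:

```javascript
// A minimal "grep for bad things" pass. Patterns are illustrative and
// deliberately loose; without context they produce plenty of noise.
const fs = require('fs');

const SUSPICIOUS = [
  { name: 'hardcoded key', re: /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_-]{20,}['"]/i },
  { name: 'eval usage', re: /\beval\s*\(/ },
  { name: 'direct HTML injection', re: /\.innerHTML\s*=/ },
];

function scanFile(file) {
  return fs
    .readFileSync(file, 'utf8')
    .split('\n')
    .flatMap((line, i) =>
      SUSPICIOUS.filter(({ re }) => re.test(line)).map(({ name }) => ({
        file,
        line: i + 1,
        issue: name,
      }))
    );
}

console.log(scanFile('edge-functions/chat.js'));
```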
Phase 4: Call Chain Tracing
This is the phase that blew my mind.
Quick ran a codebase-wide search to map out every function-to-function HTTP call in the project. It found 94 individual internal service calls spread across 24+ files. Edge functions calling serverless functions, serverless functions calling other functions, external servers calling back into the main stack.
For each call, it checked:
- Is the receiving endpoint authenticating the request?
- Is the caller sending proper credentials?
- What data is being exchanged?
- Could an external attacker replicate this call?
This is work that would take a human hours of grep, find, file-hopping, and mental bookkeeping. Quick did it systematically in minutes and presented the results as a structured map.
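Conceptually, the mapping pass boils down to something like the sketch below - my simplified reconstruction, not Quick's actual code. The internal host markers are hypothetical placeholders:

```javascript
// Walk the source tree, find fetch() calls aimed at internal hosts, and
// record each call site. Assumes the URL sits on the same line as fetch().
const fs = require('fs');
const path = require('path');

const INTERNAL_HOSTS = ['/.netlify/functions/', 'internal-api.example']; // placeholders

function* walk(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) yield* walk(full);
    else if (full.endsWith('.js')) yield full;
  }
}

const calls = [];
for (const file of walk('.')) {
  fs.readFileSync(file, 'utf8').split('\n').forEach((line, i) => {
    if (line.includes('fetch(') && INTERNAL_HOSTS.some((h) => line.includes(h))) {
      calls.push({ file, line: i + 1 });
    }
  });
}
console.log(`${calls.length} internal call sites found`);
```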
Phase 5: Report
Finally, it compiled everything into a structured security audit. 11 findings, each with a severity rating, explanation of the risk, and recommended fix. It even generated an interactive HTML report I could open in a browser tab to review alongside the code.
Total audit time: about 15 minutes. For a codebase with 30+ serverless functions, dozens of internal call chains, multiple deployment targets, and several external integrations.
Step 3: The Findings - 11 Security Issues
I'm describing these at a level that's educational without being a how-to guide for attackers. If you're building a serverless app, check your own code for these patterns.
🔴 Critical: Fix Immediately
1. Insufficient Authentication on Internal API Endpoints
The most severe finding. Multiple server-to-server endpoints that handle sensitive operations - the kind that return credentials, manage access levels, control what users can do - were relying on a protection mechanism that looked secure in the code but was trivially bypassable by anyone who understands how HTTP works.
This is one of those bugs that works perfectly during normal usage. Your frontend calls the endpoint, the check passes, everything works. But it completely falls apart under adversarial conditions. It's the security equivalent of a lock that only works when people use the doorknob.
2. Unprotected Usage Management Endpoint
A critical endpoint responsible for managing user quotas, tracking consumption, and enforcing rate limits had no authentication whatsoever. Anyone who knew the URL could manipulate usage counters, bypass rate limits, or mess with other users' quotas.
This one hurt to read. I'd written the business logic perfectly - credit tracking, tier enforcement, monthly resets. But I forgot to put a lock on the front door.
🟠 High: Fix Soon
3. Redundant Secret Storage
I was doing the right thing by hashing sensitive credentials before storing them in the database. But I was also storing the raw plaintext version alongside the hash. Probably a leftover from debugging ("let me store the raw value too so I can verify the hash is working") that never got cleaned up.
A great example of something a basic scanner might miss (there's no "vulnerability" per se, just a bad practice) but a contextual review catches immediately. Quick understood what the hash was for and flagged the plaintext field as defeating the purpose.
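The fix was straightforward: keep the hash, delete the plaintext field, and verify presented credentials by re-hashing them. A minimal sketch with Node's crypto module - the function and field names are mine, not the real codebase's:

```javascript
// Store only the hash; verify by re-hashing what the caller presents.
// Names are illustrative, not from the actual project.
const crypto = require('crypto');

function hashCredential(raw) {
  return crypto.createHash('sha256').update(raw).digest('hex');
}

function verifyCredential(presented, storedHash) {
  const a = Buffer.from(hashCredential(presented), 'hex');
  const b = Buffer.from(storedHash, 'hex');
  // timingSafeEqual avoids leaking match position through response timing
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```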
4. Server-Side Request Forgery (SSRF) Risk
Two functions accept URLs from users and fetch them server-side - one for scraping web content, one for proxying images. Neither restricted what addresses could be fetched. In a cloud environment, this is a known risk category because server-side requests can reach internal infrastructure that public requests cannot.
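The usual mitigation is to validate user-supplied URLs before fetching them server-side. Here's a hedged sketch of the kind of check involved - deliberately incomplete (it ignores redirects and DNS rebinding, for instance), so treat it as a starting point rather than a complete defense:

```javascript
// Reject URLs that point at internal infrastructure before fetching.
// Illustrative only: a robust guard also resolves DNS and re-checks
// every redirect hop.
function isSafeUrl(raw) {
  let url;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL
  }
  if (url.protocol !== 'https:') return false;

  const host = url.hostname;
  const blocked = ['localhost', '127.0.0.1', '0.0.0.0', '169.254.169.254'];
  if (blocked.includes(host)) return false;

  // Private IPv4 ranges (10/8, 192.168/16, 172.16/12)
  if (/^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(host)) return false;

  return true;
}
```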
5. Webhook Verification Bypass
The payment webhook handler had proper cryptographic signature verification - good. But it also had a fallback path that used a weaker verification method, probably added during development when I was testing without proper signatures. An attacker who could figure out the fallback value (not as hard as you'd think) could forge webhook events and grant themselves premium access.
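The fix is to make signature verification the only path and delete the fallback. For reference, here's the general shape of strict HMAC webhook verification - the header name and scheme are illustrative, since every payment provider documents its own:

```javascript
// Verify an HMAC-signed webhook with no fallback path. Scheme and header
// handling are illustrative; follow your provider's documentation.
const crypto = require('crypto');

function verifyWebhook(rawBody, signatureHeader, secret) {
  const expected = crypto.createHmac('sha256', secret).update(rawBody).digest('hex');
  const provided = Buffer.from(signatureHeader || '', 'utf8');
  const wanted = Buffer.from(expected, 'utf8');
  // Wrong length or wrong value both fail closed - no "dev mode" escape hatch
  return provided.length === wanted.length && crypto.timingSafeEqual(provided, wanted);
}
```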
6. Data Routing Mismatch
An admin function was writing to database records using one identifier scheme, but the actual schema expected a different identifier as the document key. Updates could silently land on the wrong records - or create orphaned documents. A logic bug with security implications, since it could affect who has access to what tier.
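In Firestore terms, the bug looks roughly like the sketch below - collection and field names are hypothetical, but the shape of the mistake (keying documents by one identifier while the schema expects another) is the real finding:

```javascript
// Hypothetical Firestore example of the identifier mismatch.
const { getFirestore } = require('firebase-admin/firestore');
const db = getFirestore();

// BEFORE (bug): documents are keyed by auth UID, but the admin path
// wrote using email - updates landed on wrong or orphaned docs:
//   await db.collection('users').doc(email).update({ tier: 'pro' });

// AFTER: resolve the UID first, then update the correct document
async function setTier(uid, tier) {
  await db.collection('users').doc(uid).update({ tier });
}
```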
🔵 Medium and Lower
The remaining five covered:
- Output sanitization patterns that could be tightened (especially for AI-generated content rendered in the browser)
- CORS configuration that was more permissive than necessary at the infrastructure level
- Input sanitization gaps in a code execution sandbox
- An unauthenticated file upload URL generator
- A database query pattern with both performance and timing side-channel implications
Not "hair on fire" urgent, but the kind of things that compound over time if left unaddressed.
Step 4: The Fix - 94 Patches Across 24 Files
Finding bugs is useful. Fixing bugs across a 30+ function codebase in one session is something else.
The Strategy
Quick proposed replacing the weak authentication on internal endpoints with a proper shared secret for server-to-server communication:
- Generate a cryptographically strong random secret
- Store it as an environment variable on every platform that hosts your code
- Every internal HTTP call includes the secret in a custom header
- Every receiving endpoint validates the secret before processing
- No secret, no access - doesn't matter where the request comes from
Simple, battle-tested pattern. The challenge wasn't the concept - it was the execution at scale. 94 fetch calls across 24 files, running on different runtimes with different environment variable APIs.
The Vulnerable Pattern
Here's a simplified version of the problem. My serverless functions were calling each other over HTTP, and the "authentication" on the receiving end was insufficient:
```javascript
// BEFORE: weak validation on the receiving endpoint
exports.handler = async (event) => {
  const { action } = JSON.parse(event.body);

  // This was the only protection on sensitive actions
  if (isSensitiveAction(action)) {
    const origin = event.headers.origin || '';
    if (!ALLOWED_ORIGINS.includes(origin)) {
      return { statusCode: 403, body: 'Unauthorized' };
    }
  }

  // Returns sensitive data, modifies records, etc.
  if (action === 'get-config') {
    return {
      statusCode: 200,
      body: JSON.stringify(await fetchSensitiveConfig())
    };
  }
};
```
The check looks like it's doing something, and during normal usage it works perfectly. But the underlying assumption - that only trusted code can call this endpoint - doesn't hold up against anyone with curl.
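To make "anyone with curl" concrete: outside a browser, the Origin header is just another request header that any HTTP client can set to whatever it likes. A hedged sketch, with a placeholder URL and origin:

```javascript
// Run as an ES module (or wrap in an async function). URL and origin
// are placeholders. The spoofed Origin sails past the check above.
const res = await fetch('https://example.com/api/internal', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Origin': 'https://my-trusted-app.example', // spoofed to match ALLOWED_ORIGINS
  },
  body: JSON.stringify({ action: 'get-config' }),
});
console.log(res.status); // 200 - the lock opened for a stranger
```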
The Server-Side Fix
```javascript
// AFTER: proper server-to-server authentication
// The secret lives in an environment variable, never in source
const INTERNAL_SECRET = process.env.INTERNAL_SECRET;

exports.handler = async (event) => {
  const { action } = JSON.parse(event.body);

  if (isSensitiveAction(action)) {
    const provided = event.headers['x-internal-secret'] || '';
    if (!INTERNAL_SECRET || provided !== INTERNAL_SECRET) {
      console.log(`[SECURITY] Blocked ${action}`);
      return {
        statusCode: 403,
        body: JSON.stringify({ error: 'Unauthorized' })
      };
    }
  }

  // Now we know the caller is our own infrastructure
  if (action === 'get-config') {
    return {
      statusCode: 200,
      body: JSON.stringify(await fetchSensitiveConfig())
    };
  }
};
```
Key details: the secret comes from an environment variable (never hardcoded), the check validates against both missing secrets and wrong secrets, and security events get logged for monitoring.
The Client-Side Fix (94 Call Sites)
Every internal fetch call needed the secret header:
```javascript
// BEFORE
const response = await fetch(INTERNAL_API_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ action: 'sensitive-action', userId })
});

// AFTER
const response = await fetch(INTERNAL_API_URL, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Internal-Secret': getEnvSecret('INTERNAL_SECRET')
  },
  body: JSON.stringify({ action: 'sensitive-action', userId })
});
```
One subtle detail: edge functions and serverless functions in my stack use different runtime APIs to access environment variables. Quick handled this correctly for every single file - it knew which runtime each function ran in and used the right API call.
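That's also what the getEnvSecret helper in the snippet above papers over. A hedged sketch, assuming a Deno-based edge runtime and a Node-based serverless runtime (check your platform's docs for the real APIs):

```javascript
// One helper, two runtimes: Deno exposes env vars via Deno.env.get,
// Node via process.env. Assumes those two runtimes; adjust to taste.
function getEnvSecret(name) {
  if (typeof Deno !== 'undefined') {
    return Deno.env.get(name) || ''; // edge functions (Deno)
  }
  return process.env[name] || '';    // serverless functions (Node)
}
```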
The Batch Patching Process
Here's how Quick handled 94 changes without missing one:
1. Pattern analysis. It confirmed every internal fetch call followed the same structure: a fetch() with a headers object containing 'Content-Type': 'application/json'. This consistency meant it could write a reliable batch patch.
2. Targeted script. Rather than editing each file by hand (error-prone at this scale), it wrote a script that found every relevant fetch call, identified the headers object, injected the secret header, and handled the runtime-specific env var difference automatically (a simplified sketch follows after this list).
3. Verification pass. After patching, it scanned the entire codebase again:
```
=== VERIFICATION ===
edge-functions/coder.js: 8 patched, 0 missing
edge-functions/chat.js: 8 patched, 0 missing
edge-functions/app-chat.js: 5 patched, 0 missing
edge-functions/api.js: 4 patched, 0 missing
edge-functions/reason.js: 11 patched, 0 missing
edge-functions/video.js: 2 patched, 0 missing
edge-functions/agent.js: 2 patched, 0 missing
... and 17 more files
Total: 94 patched, 0 missing ✅
```
4. False positive detection. The scanner flagged one line as potentially unpatched. Quick investigated and found it was a call to an external third-party API that happened to sit in the same file near an internal call. It correctly identified the false positive and left it alone.
That last bit - catching its own scanner's mistake - is exactly the kind of thing that prevents "the fix broke something else" situations.
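For the curious, the core of that script is a guarded find-and-replace. The sketch below is my simplified reconstruction, not Quick's actual output - the real version also handled the per-runtime env API and skipped external API calls:

```javascript
// Inject the secret header into every internal fetch's headers object.
// Simplified: assumes the consistent pattern confirmed in step 1.
const fs = require('fs');

function patchFile(file) {
  const src = fs.readFileSync(file, 'utf8');
  if (src.includes('X-Internal-Secret')) return 0; // already patched
  let count = 0;
  const patched = src.replace(
    /headers:\s*\{\s*'Content-Type':\s*'application\/json'/g,
    (match) => {
      count += 1;
      return `${match},\n      'X-Internal-Secret': getEnvSecret('INTERNAL_SECRET')`;
    }
  );
  if (count > 0) fs.writeFileSync(file, patched);
  return count;
}

console.log(patchFile('edge-functions/chat.js'), 'calls patched');
```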
Full Scope
| Category | Files | Calls Patched |
|---|---|---|
| Edge Functions (core) | 9 | 42 |
| Edge Functions (features) | 11 | 38 |
| Serverless Functions | 4 | 10 |
| External Server | 2 | 4 |
| Server-side endpoints | 2 | - |
| Total | 28 | 94 |
Step 5: Deployment
After the code changes, Quick generated a deployment checklist:
Generate the secret. A cryptographically random 64-character string. Long enough to be unguessable, safe for environment variables.
Set environment variables on every platform that runs your code. One value, shared across all runtimes, so every internal service can both send and verify it.
Deploy. Push the code, rebuild, restart the external server.
Verify. Hit a sensitive endpoint from outside the infrastructure without the secret. Confirm it returns 403 Forbidden.
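For reference, the generation step and the verification step both fit in a few lines of Node - the endpoint URL below is a placeholder:

```javascript
// Generate a 64-character secret (32 random bytes as hex), then confirm
// an unauthenticated external call is rejected. URL is a placeholder.
const crypto = require('crypto');

console.log(crypto.randomBytes(32).toString('hex')); // set this in each platform's env

async function smokeTest() {
  const res = await fetch('https://example.com/api/internal', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }, // no secret header
    body: JSON.stringify({ action: 'get-config' }),
  });
  console.log(res.status === 403 ? 'locked down' : `still open: ${res.status}`);
}

smokeTest();
```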
The whole deployment took about 10 minutes. Most of that was waiting for the build.
What This Experience Taught Me
Serverless Doesn't Mean Secure by Default
I think a lot of us have a subconscious assumption that serverless = managed = someone else handles security. And that's partially true - you don't patch operating systems or manage TLS certificates. But application-level security is still 100% your responsibility.
In serverless, the attack surface is actually larger than a monolith in some ways. Every function is a separate endpoint. Every internal call crosses a network boundary. There's no single API gateway enforcing authentication on internal traffic unless you build one. You have to secure each function individually, and when you have 30+ of them, it's easy to miss one.
The "It Only Works Internally" Assumption Is Dangerous
I had multiple endpoints that were "only called by other functions." No user-facing UI pointed to them. No documentation listed them. They felt internal. But they were all publicly accessible URLs, discoverable by anyone probing common URL patterns.
Security through obscurity isn't security. If an endpoint does something sensitive, it needs proper authentication - even if you think nobody will ever find it.
AI-Assisted Security Reviews Are a Legitimate Workflow
I was skeptical going in. I expected Quick to find the obvious stuff (hardcoded strings, missing try/catch) and miss the architectural problems. Instead, it:
- Understood cross-file dependencies and traced call chains across 24+ files
- Recognized runtime differences and used the correct environment variable API for each
- Prioritized correctly - the severity ratings matched what a human security engineer would say
- Implemented at scale - didn't just find 94 problems, it fixed all 94 with correct, context-aware patches
- Self-verified by running a verification pass and catching its own false positive
Is it a replacement for a professional penetration test? No. But for a solo developer who needs to go from "I think my code is probably fine" to "I've systematically verified and fixed my internal auth," it's remarkably effective.
Good Agent Instructions Are an Investment
Twenty minutes writing project instructions saved me hours. Quick never proposed changes to files I marked as off-limits. It used the correct file paths and conventions. It formatted output exactly how I asked. It asked for confirmation before touching anything sensitive.
The difference between "here's an AI, good luck" and "here's an AI that knows your project structure, your conventions, and your boundaries" is the difference between generic advice and actionable fixes.
Regular Audits Are Now Feasible for Small Teams
My old security review cycle:
- Know I should do an audit
- Keep putting it off because it's time-consuming
- Eventually something bad happens
- Panic
My new cycle:
- Open Quick, it already has access to the project folder
- Run the audit (~15 minutes)
- Review findings and fix (1–2 hours)
- Deploy
- Repeat monthly
That's a cadence I'll actually stick to.
Try This Yourself
If you want to run a similar audit on your own project:
1. Install Amazon Quick and grant folder access. It's a desktop app - once installed, go to Settings → My Computer → Local Folders and add your project directory. Quick can now read, search, and write files in that folder directly.
2. Write project instructions. Set custom instructions that describe your tech stack, file structure conventions, security concerns specific to your architecture, what Quick shouldn't touch without asking, your desired output format, and escalation triggers. Don't skip this step - the quality of the audit is directly proportional to the quality of context you provide.
3. Make sure the folder covers your full project. Config files, serverless functions, frontend code, utility libraries - it should all be under the allowed folder. The more Quick can see, the better it can trace dependencies and find issues that span multiple files.
4. Ask for a security review. Be specific: "Check all serverless functions for authentication, input validation, and access control issues. Check for secrets exposure, SSRF risks, and unsafe data handling patterns."
5. Review, fix, verify, deploy. Read each finding. Ask follow-up questions. Quick can edit the files directly and run verification scripts - no copy-pasting needed. Ship it.
The whole process - from granting folder access to deploying fixes - took me one focused evening. Your mileage will vary depending on codebase size, but the point is: this is hours, not weeks.
Final Thoughts
I shipped security vulnerabilities to production. That's embarrassing to admit publicly, but I know I'm not the only solo developer who's done it. When you're building fast, security reviews feel like a luxury - something big companies with dedicated security teams do.
Amazon Quick changed that for me. What used to be a "someday" task is now part of my regular workflow. The 11 vulnerabilities it found were real, the fixes it implemented were correct, and my users' data is safer because I finally stopped procrastinating.
If you've been putting off a security review of your own project - stop putting it off. Grant Quick access to your project folder. Ask it to review. See what it finds.
You might not like what comes back. But you'll sleep better after fixing it.