Two different problems
Security tools have blind spots. AWS Trusted Advisor can't detect incomplete observation data, latent exposure behind Public Access Block, or ACL escalation paths. Those are real gaps. If your tool doesn't check for something, you won't find it.
But there's a bigger problem that gets less attention: the tool checks for things individually and misses what happens when they combine.
This is compound risk. It's not a missing check — it's a missing dimension of analysis.
What Trusted Advisor checks
Trusted Advisor evaluates settings one at a time:
- Is this bucket public? Yes/No.
- Is encryption enabled? Yes/No.
- Is logging turned on? Yes/No.
Each check returns an independent result. The results don't talk to each other. A bucket that passes encryption and fails public access generates two separate findings with no connection between them.
This is useful. It catches the obvious things. But it can't express a statement like: "encryption is enabled AND the bucket is public, which means encryption provides no confidentiality benefit because anyone can read the data through the public endpoint."
That's not a missing check. Both checks exist. What's missing is the relationship between them.
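To make the gap concrete, here is a minimal sketch of independent checks versus the relationship between them. All names (`BucketConfig`, `check_public`, `check_encryption`) are illustrative, not from Trusted Advisor or any real tool:

```python
# Hypothetical sketch: two independent checks on the same bucket.
from dataclasses import dataclass

@dataclass
class BucketConfig:
    public: bool
    encrypted: bool

def check_public(b: BucketConfig) -> bool:
    """PASS (True) when the bucket is not public."""
    return not b.public

def check_encryption(b: BucketConfig) -> bool:
    """PASS (True) when encryption at rest is enabled."""
    return b.encrypted

bucket = BucketConfig(public=True, encrypted=True)

# Independent evaluation: two unrelated results that never meet.
results = {"public": check_public(bucket), "encrypt": check_encryption(bucket)}

# The relationship neither check can express on its own:
# encryption passes, yet it buys nothing while the bucket is public.
false_confidence = bucket.encrypted and bucket.public
```

The checklist model stops at `results`; the compound model also asks the `false_confidence` question.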
How compound risk caused a $190 million breach
The Capital One breach in 2019 wasn't caused by a single misconfiguration. It was caused by three settings that were individually low-to-medium severity but catastrophic in combination:
- An SSRF vulnerability in the WAF configuration — exploitable but contained if nothing else was wrong
- An overly permissive IAM role on the compromised instance — broad but not dangerous if the instance wasn't reachable
- Unencrypted S3 buckets with sensitive data — risky but not breached if nobody had credentials
Any scanner would have flagged each setting independently. No scanner correlated them into: "an attacker who exploits the SSRF can assume the overly permissive role and read unencrypted customer data from S3."
The breach cost $190 million. Each individual finding would have been prioritized as medium.
The three compound patterns
In Stave, the open source security CLI I built, compound risk detection runs after individual control evaluation. It looks for specific combinations that tools checking settings individually will always miss:
Pattern 1: Public access + wildcard IAM policy
```
[CRITICAL] COMPOUND.001
Triggers: CTL.S3.PUBLIC.001 (FAIL) + CTL.S3.ACCESS.002 (FAIL)
Public access with overly broad IAM permissions — the S3 + IAM
lateral movement pattern present in the majority of documented
AWS breaches.
```
The bucket is public. The policy allows wildcard actions. Individually they're HIGH severity. Together they're the Capital One pattern: public entry point + broad permissions = full data access.
A checklist that evaluates these independently will list them as two findings on page 4 and page 12 of a 30-page report. An auditor reading sequentially might not connect them. Compound detection puts them at the top as a single critical finding.
Pattern 2: Encryption enabled + public access
```
[HIGH] COMPOUND.002
Triggers: CTL.S3.PUBLIC.001 (FAIL) + CTL.S3.ENCRYPT.001 (PASS)
Encryption at rest is configured but the bucket is publicly
accessible. Encryption provides no confidentiality benefit
while public access is enabled.
```
This is the false confidence pattern. The encryption check passes. The team marks "encryption at rest" as compliant. But encryption at rest protects against physical disk theft — it does nothing when the data is served over a public HTTPS endpoint to anyone who asks.
Trusted Advisor would show the encryption check as green. The team would feel secure. The data would be publicly readable.
Pattern 3: VPC endpoint without endpoint policy
```
[HIGH] COMPOUND.003
Triggers: CTL.S3.NETWORK.VPC.001 (PASS) + CTL.S3.NETWORK.POLICY.001 (FAIL)
A VPC endpoint restricts access to this bucket, but the endpoint
policy does not restrict which bucket ARNs are reachable. Any
principal on the VPC can reach any S3 bucket in any account via
the endpoint.
```
The VPC endpoint exists. The network restriction check passes. But the endpoint has no policy limiting which buckets it can access. Any EC2 instance on the VPC can reach any S3 bucket in any AWS account through the endpoint — including buckets in other accounts. The endpoint that was supposed to restrict access became a wormhole.
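The endpoint-side fix is a policy that pins the reachable ARNs. Here is a minimal sketch of such a policy, shown as a Python dict for readability; the bucket name is a placeholder:

```python
# Illustrative VPC endpoint policy restricting the endpoint to one bucket.
# "my-data-bucket" is a placeholder, not a real resource.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            # Only these ARNs are reachable through the endpoint;
            # buckets in other accounts stop being accessible.
            "Resource": [
                "arn:aws:s3:::my-data-bucket",
                "arn:aws:s3:::my-data-bucket/*",
            ],
        }
    ],
}
```

Without a `Resource` restriction like this, the endpoint's default policy allows all of S3, which is exactly the wormhole COMPOUND.003 flags.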
Why this is harder than adding more checks
Trusted Advisor's blind spots can be fixed by adding checks. Can't detect ACL escalation? Add an ACL check. Can't detect incomplete data? Add a completeness check. Each blind spot has a corresponding check that closes it.
Compound risk can't be fixed by adding more checks. The individual checks already exist. What's missing is the correlation logic that evaluates combinations. This requires a different kind of analysis:
- Run all individual checks first — establish the baseline of pass/fail per control per asset
- Scan for known dangerous combinations — look for specific multi-control patterns on the same asset
- Elevate the combined finding above its individual components — the compound finding is the root cause, the individual findings are symptoms
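The three steps above can be sketched in a few dozen lines. This is an illustrative model, not Stave's actual implementation; the control IDs mirror the ones shown earlier, and the data structures are assumptions:

```python
# Sketch of compound-risk correlation over per-asset check results.
from dataclasses import dataclass

@dataclass
class Finding:
    control: str
    asset: str
    status: str    # "PASS" or "FAIL"
    severity: str

# Known dangerous combinations: required control statuses -> compound finding.
COMPOUND_RULES = [
    {"id": "COMPOUND.001", "severity": "CRITICAL",
     "triggers": {"CTL.S3.PUBLIC.001": "FAIL", "CTL.S3.ACCESS.002": "FAIL"}},
    {"id": "COMPOUND.002", "severity": "HIGH",
     "triggers": {"CTL.S3.PUBLIC.001": "FAIL", "CTL.S3.ENCRYPT.001": "PASS"}},
]

def correlate(findings):
    """Step 2: scan each asset's results for known multi-control patterns."""
    by_asset = {}
    for f in findings:
        by_asset.setdefault(f.asset, {})[f.control] = f.status
    compounds = []
    for asset, statuses in by_asset.items():
        for rule in COMPOUND_RULES:
            if all(statuses.get(c) == s for c, s in rule["triggers"].items()):
                compounds.append(Finding(rule["id"], asset, "FAIL", rule["severity"]))
    return compounds

# Step 1 output (the baseline), then step 2:
baseline = [
    Finding("CTL.S3.PUBLIC.001", "bucket-a", "FAIL", "HIGH"),
    Finding("CTL.S3.ACCESS.002", "bucket-a", "FAIL", "HIGH"),
]
elevated = correlate(baseline)  # one CRITICAL COMPOUND.001 finding for bucket-a
```

Step 3 is then a sort that puts `elevated` above `baseline` in the report: the compound finding is the root cause, the individual findings are its symptoms.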
Trusted Advisor's architecture — independent per-resource checks with no cross-check correlation — structurally cannot do step 2. It's not a feature they haven't built yet. It's a design constraint of evaluating settings in isolation.
The severity math
Individual findings have individual severities. Compound findings have severities that don't follow from the individual values by any simple rule like max or sum.
| Finding A | Finding B | Individual Max | Compound Severity |
|---|---|---|---|
| PUBLIC bucket (HIGH) | Wildcard policy (HIGH) | HIGH | CRITICAL |
| Encryption enabled (PASS) | PUBLIC bucket (HIGH) | HIGH | HIGH (false confidence) |
| VPC endpoint (PASS) | No endpoint policy (MEDIUM) | MEDIUM | HIGH (wormhole) |
The compound severity isn't max(A, B). It's a qualitative judgment: "these two findings together create an attack path that neither creates alone." The Capital One breach proved this — three medium findings combined into a $190 million incident.
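As a tiny illustration of that point, contrast a naive `max()` with a rule-assigned compound severity. The ordering and the severity table here are assumptions mirroring the table above:

```python
# Why compound severity isn't max(A, B). Values mirror the table above.
ORDER = {"PASS": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def individual_max(a: str, b: str) -> str:
    """What a naive tool reports: the worse of the two findings."""
    return a if ORDER[a] >= ORDER[b] else b

# Rule-assigned compound severities: a qualitative judgment per pattern.
COMPOUND_SEVERITY = {
    "COMPOUND.001": "CRITICAL",  # public + wildcard: worse than either HIGH
    "COMPOUND.002": "HIGH",      # encryption PASS + public: false confidence
    "COMPOUND.003": "HIGH",      # endpoint PASS + no policy: wormhole
}

naive = individual_max("HIGH", "HIGH")       # the checklist answer
actual = COMPOUND_SEVERITY["COMPOUND.001"]   # the compound answer
```

The gap between `naive` and `actual` is exactly one severity level, and that level is the difference between "page 12 of the report" and "top of the report".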
The $190 million proof of business value
This isn't a theoretical risk model. It's a federal court case.
Capital One had security tools. Those tools found the SSRF vulnerability, the overly permissive IAM role, and the unencrypted S3 buckets. Each was logged. Each was individually categorized as medium severity. Each sat in a findings queue alongside hundreds of other medium findings.
No tool said: "these three medium findings on connected resources form an attack path that leads to 100 million customer records."
The attacker saw the connection. The tools didn't.
The cost of that missing correlation: $190 million in regulatory penalties, legal settlements, customer notification, and remediation. Not from a zero-day. Not from a sophisticated nation-state attack. From three settings that every scanner on the market could detect individually.
Stave's COMPOUND.001 detection pattern — public access + wildcard IAM policy — is the exact pattern that was exploited. It runs against local configuration snapshots in seconds. It's open source. It requires zero live credentials.
The question for any security team is simple: would you rather find this combination before deployment at zero cost, or after a breach at $190 million?
That's not a theoretical ROI. It's a documented federal case with public filings, and it proves the business value of compound risk detection over individual setting checks. No compliance checklist tool can make this claim, because such tools structurally cannot detect the relationship between findings.
What this means for tool selection
When evaluating security tools, there are two questions:
- What does it check? — This is the coverage question. More checks = fewer blind spots.
- Does it correlate findings? — This is the compound risk question. Correlation catches the attack paths that cause breaches.
Most tools compete on question 1. They list how many checks they run, how many services they cover, how many CIS benchmarks they implement. That's important but insufficient.
Question 2 is where breaches live. The settings that caused the Capital One breach were all individually detectable. The attack path they created together was not — because no tool was looking at the combination.