In 2016, a researcher found that a2.bime.io had a CNAME record pointing to bimeio.s3.amazonaws.com. The bucket bimeio did not exist. It was not owned by Bime. It was not owned by anyone.
The researcher created the bucket in their own AWS account. a2.bime.io now served the researcher's content: attacker-controlled pages under Bime's own domain, implicitly trusted by Bime's users.
This is HackerOne #121461. The fix was either claiming the bucket name or deleting the CNAME. Either takes under a minute. The window between the bucket's deletion and the researcher's claim was measured in days.
Why This Attack Requires Nothing
S3 bucket names are globally unique across all AWS accounts. When a bucket is deleted, the name becomes available to any AWS account immediately. If a DNS CNAME still points to that bucket's S3 endpoint, whoever registers the name first controls what the DNS record resolves to.
The attack requires no credentials, no exploit, no social engineering:
# Step 1: find the dangling CNAME
dig a2.bime.io
# a2.bime.io → bimeio.s3.amazonaws.com
# Step 2: check if the bucket exists
aws s3 ls s3://bimeio 2>&1
# NoSuchBucket
# Step 3: register it
aws s3 mb s3://bimeio --region us-east-1
# make_bucket: bimeio
# a2.bime.io now serves your content
Three commands. No special access. The domain is yours until Bime notices.
The Gap Traditional Tools Cannot See
CSPM tools inventory S3 buckets in the organization's AWS accounts. When a bucket is deleted, it disappears from the inventory. The scan finds nothing wrong — because there is nothing in the account to scan. The bucket does not exist.
The DNS record is in Route53 or Cloudflare or a registrar's control panel. It is not an AWS resource. It does not appear in AWS Config. It does not appear in Security Hub. It does not appear in any CSPM finding.
The NoSuchBucket error that a2.bime.io was returning is a well-formed HTTP 404 response — monitoring does not alert on it. It looks like an outage, not a vulnerability.
The gap sits between two inventories: the AWS account (which has no bucket) and the DNS zone (which has a CNAME). Neither flags the mismatch. The organization has no tool that cross-references DNS records against S3 bucket ownership.
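The missing cross-reference is mechanical to build once both inventories are exported. A minimal sketch in Python (the function name, the regex, and the sample data are illustrative assumptions, not part of any real tool):

```python
import re

# Matches virtual-hosted S3 endpoints like bucket.s3.amazonaws.com or
# bucket.s3.us-east-1.amazonaws.com and captures the bucket name.
S3_ENDPOINT = re.compile(
    r"^(?P<bucket>[a-z0-9.-]+)\.s3([.-][a-z0-9-]+)?\.amazonaws\.com\.?$"
)

def find_dangling_s3_cnames(cname_records, owned_buckets):
    """Return (subdomain, bucket) pairs whose target bucket is not in our inventory."""
    dangling = []
    for name, target in cname_records.items():
        m = S3_ENDPOINT.match(target)
        if m and m.group("bucket") not in owned_buckets:
            dangling.append((name, m.group("bucket")))
    return dangling

# Reconstructed Bime state: the CNAME survives, the bucket does not.
records = {"a2.bime.io": "bimeio.s3.amazonaws.com"}
buckets = set()  # the account's bucket inventory no longer contains "bimeio"
print(find_dangling_s3_cnames(records, buckets))  # → [('a2.bime.io', 'bimeio')]
```

Feeding it a zone export and the output of a bucket listing closes exactly the gap neither inventory flags on its own.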
Why Teams Miss This
The sequence is common. A team deploys a feature using S3, sets up the CNAME, and ships it. Later the feature is deprecated and the bucket is deleted. Deleting the bucket happens in the AWS console; removing the CNAME happens in the DNS provider — a different system, often owned by a different team. The CNAME removal becomes a separate task that does not block the deprecation and gets forgotten.
Months later, nobody remembers that a2.bime.io exists. It does not appear in any active service inventory. It does not generate any alerts. It sits in the DNS zone file, pointing at nothing, waiting.
The System Invariant
The invariant is precise:
Every DNS CNAME pointing to an S3 endpoint must reference a bucket that exists and is owned by the same organization.
Observable in a snapshot without making any change to the infrastructure: the DNS record points to bimeio.s3.amazonaws.com, the bucket bimeio does not exist in the account inventory, the name is claimable. That is the full finding — no live exploitation required.
What Stave Detects
Stave models the DNS-to-S3 reference as a first-class asset with two properties:
{
  "id": "bime-a2-cname-ref",
  "type": "s3_bucket_reference",
  "properties": {
    "s3_ref": {
      "endpoint": "a2.bime.io",
      "bucket": "bimeio",
      "bucket_exists": false,
      "bucket_owned": false
    }
  }
}
The control evaluates the reference, not the bucket:
id: CTL.S3.BUCKET.TAKEOVER.001
name: Referenced S3 Buckets Must Exist And Be Owned
unsafe_predicate:
  any:
    - field: properties.s3_ref.bucket_exists
      op: eq
      value: false
    - field: properties.s3_ref.bucket_owned
      op: eq
      value: false
Either condition alone fires the control. Both being false — bucket does not exist and is not owned — means the name is available for registration by anyone.
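A predicate of this shape is straightforward to evaluate. The sketch below applies an any-of-eq predicate to the asset snapshot; it assumes nothing about Stave's internals beyond the YAML shown, and the function names are illustrative:

```python
def get_field(asset, dotted_path):
    """Walk a dotted path such as properties.s3_ref.bucket_exists."""
    value = asset
    for key in dotted_path.split("."):
        value = value[key]
    return value

def evaluate_any(asset, clauses):
    """Fire when any eq-clause matches, mirroring the control's unsafe_predicate."""
    return any(get_field(asset, c["field"]) == c["value"]
               for c in clauses if c["op"] == "eq")

asset = {
    "id": "bime-a2-cname-ref",
    "type": "s3_bucket_reference",
    "properties": {"s3_ref": {"bucket": "bimeio",
                              "bucket_exists": False,
                              "bucket_owned": False}},
}
clauses = [
    {"field": "properties.s3_ref.bucket_exists", "op": "eq", "value": False},
    {"field": "properties.s3_ref.bucket_owned", "op": "eq", "value": False},
]
print(evaluate_any(asset, clauses))  # → True: the control fires
```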
The E2E Test
This report is one of 28 end-to-end tests in Stave's test suite. The test reconstructs the exact Bime configuration — an s3_bucket_reference asset with bucket_exists: false and bucket_owned: false — across two snapshots spanning 8 days, runs stave apply, and compares the output byte-for-byte against a golden file.
./stave apply \
--controls testdata/e2e/e2e-h1-bime-121461/controls \
--observations testdata/e2e/e2e-h1-bime-121461/observations \
--max-unsafe 168h \
--now 2016-03-18T00:00:00Z
Expected output:
Status: NON_COMPLIANT
Finding: CTL.S3.BUCKET.TAKEOVER.001 — bime-a2-cname-ref
Unsafe for 192 hours (threshold: 168 hours)
Misconfigurations: bucket_exists=false, bucket_owned=false
Exit code: 3
The test proves that CTL.S3.BUCKET.TAKEOVER.001 detects the exact configuration state that enabled the Bime takeover — not in theory, but by evaluating a reconstructed snapshot against the control predicate with a golden file proving the output.
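The 192-hour figure is the gap between the first snapshot showing the dangling reference and the --now timestamp. A sketch of the arithmetic, where the first-snapshot date is an assumption back-computed from the 8-day span, not taken from the test data:

```python
from datetime import datetime, timedelta

# Assumed first snapshot showing the dangling reference: 192 hours before --now.
first_unsafe = datetime(2016, 3, 10)
now = datetime(2016, 3, 18)           # --now 2016-03-18T00:00:00Z
max_unsafe = timedelta(hours=168)     # --max-unsafe 168h

unsafe_for = now - first_unsafe
print(unsafe_for / timedelta(hours=1), unsafe_for > max_unsafe)  # → 192.0 True
```

Crossing the 168-hour threshold is what flips the run to NON_COMPLIANT with exit code 3.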
The Asset Type Distinction
The finding is on bime-a2-cname-ref, not on an S3 bucket. The asset type is s3_bucket_reference — the DNS record that points to S3, not the bucket itself.
This distinction matters. The bucket does not exist in any account. A bucket-level scanner has nothing to evaluate. The vulnerability lives in the reference — the DNS record that points to a name that is no longer owned. Stave models the reference as an asset precisely because the reference creates the risk.
This is the same principle as stave path — Stave reasons about relationships between assets, not just about assets in isolation. A CNAME record and the bucket it points to form a relationship. When the bucket end of that relationship is broken, the CNAME becomes a liability.
Remediation
Two options, both under a minute:
Option A — Claim the bucket name:
aws s3 mb s3://bimeio --region us-east-1
The bucket can be empty. The goal is to claim the namespace before an attacker does. Apply Block Public Access immediately after:
aws s3api put-public-access-block \
--bucket bimeio \
--public-access-block-configuration \
BlockPublicAcls=true,IgnorePublicAcls=true,\
BlockPublicPolicy=true,RestrictPublicBuckets=true
Option B — Remove the CNAME:
aws route53 change-resource-record-sets \
  --hosted-zone-id ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "a2.bime.io",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "bimeio.s3.amazonaws.com"}]
      }
    }]
  }'
Option A is faster — no DNS propagation delay. Option B is cleaner — it removes the unused reference entirely. Do Option B regardless: a subdomain that serves nothing should not exist.
The process fix:
Before deleting any S3 bucket, search DNS records for references to that bucket name. Remove the CNAME before deleting the bucket.
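That process rule can be enforced with a small guard in the deletion tooling. A hypothetical pre-deletion check (function names and the zone-export format are assumptions, not a real AWS or Stave API):

```python
def referencing_records(bucket, cname_records):
    """CNAMEs whose target looks like any S3 endpoint form of this bucket name."""
    prefix = bucket + ".s3"
    return [name for name, target in cname_records.items()
            if target.startswith(prefix + ".") or target.startswith(prefix + "-")]

def safe_to_delete(bucket, cname_records):
    """Refuse deletion while any DNS record still points at the bucket's endpoint."""
    blockers = referencing_records(bucket, cname_records)
    if blockers:
        print(f"refusing to delete {bucket}: still referenced by {blockers}")
        return False
    return True

zone = {"a2.bime.io": "bimeio.s3.amazonaws.com"}
print(safe_to_delete("bimeio", zone))  # prints the blocker, then False
```

Run against the Bime zone, the guard would have blocked the bucket deletion until the CNAME was removed first.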
Checklist
- Audit DNS zone for CNAMEs pointing to *.s3.amazonaws.com or *.s3-*.amazonaws.com
- For each: verify the referenced bucket exists and is owned by the account
- Bucket deletion process includes DNS record cleanup as a required step
- CTL.S3.BUCKET.TAKEOVER.001 runs in CI on every infrastructure change
- DNS changes and bucket deletions are correlated in change management
The bucket was deleted. The DNS record was not deleted. The attack was three commands.
HackerOne #121461 — Bime S3 bucket takeover via dangling CNAME. Stave E2E test e2e-h1-bime-121461 reconstructs the vulnerable configuration and verifies detection against a golden file. Stave detects dangling S3 bucket references via CTL.S3.BUCKET.TAKEOVER.001, evaluated from local DNS and S3 inventory snapshots without cloud credentials.