Roberto Belotti
How I Built a Serverless Scanner to Find (and Kill) Zombie AWS Resources

Every AWS account has zombies.

Not the fun kind. The kind that silently drain your budget while nobody's looking. An EBS volume that was attached to an instance you terminated six months ago. A NAT Gateway routing traffic for a VPC that no longer has any workloads. A Transfer Family SFTP server that was set up for a migration, used once, and forgotten.

I've audited enough accounts to know this isn't an edge case. It's the default. Infrastructure outlives the context that created it. Projects get cancelled, teams move on, POCs never get torn down. But the meter keeps running.

AWS Cost Explorer will tell you what you're spending. It won't tell you why (or whether anyone still needs it). So I built a tool that answers that question.

aws-zombie-hunter is an open-source, container-based Lambda that scans an AWS account for orphaned resources across seven categories, estimates the monthly waste, and writes a structured JSON report to S3.

This article walks through the architecture, the scanner design pattern, the testing strategy, and the things I learned along the way.

Why a Lambda (and not a CLI)

The first version of this in my head was a CLI script. Run it locally, pipe output to a file, done. But that falls apart quickly for anything beyond a hobby project.

A CLI means someone has to remember to run it. It needs credentials on a developer's machine. It doesn't scale to multiple accounts. And when you want to track zombie trends over time (are we getting better at cleaning up, or worse?), you need persistent storage and a schedule.

A Lambda solves all of that. EventBridge triggers it on a schedule (daily, weekly, whatever makes sense). Results go to S3 with a date-based key, so you get historical data for free. IAM handles permissions (read-only, no credentials on laptops). And it costs roughly $0.10/month to run (which is ironic, given that it typically finds hundreds of dollars in waste).

I went with a container-based Lambda (Python 3.12 on the official AWS base image) instead of a zip deployment. The reason is practical: moto, boto3, and the rest of the test/dev tooling push the package well past Lambda's 250 MB zip limit. A container image gives you up to 10 GB and a clean Dockerfile to reproduce the build. It also shows Docker competence in a portfolio project, which doesn't hurt.
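For reference, the whole container build fits in a handful of lines. A minimal sketch based on the official base image (the handler module path is my assumption, not necessarily the repo's actual layout):

FROM public.ecr.aws/lambda/python:3.12

# Install runtime dependencies into the image
COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip install -r requirements.txt

# Copy the application code and point Lambda at the handler
COPY src/ ${LAMBDA_TASK_ROOT}/src/
CMD ["src.handler.lambda_handler"]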

The scanner architecture

The core design problem was: how do you scan seven different AWS resource types without the codebase turning into a 500-line if/elif chain?

The answer is a scanner registry with a common interface. Every scanner is a subclass of BaseScanner, which defines the contract:

from abc import ABC, abstractmethod

import boto3

# ResourceType and ZombieResource are the project's own models
# (an enum of scanned types, a dataclass for a finding); path assumed
from .models import ResourceType, ZombieResource


class BaseScanner(ABC):
    VERSION: str = "1.0.0"

    def __init__(self, session: boto3.Session, regions: list[str]):
        self.session = session
        self.regions = regions

    @property
    @abstractmethod
    def resource_type(self) -> ResourceType:
        ...

    @abstractmethod
    def scan(self) -> list[ZombieResource]:
        ...

Each scanner knows how to detect zombies for one resource type. The EIPScanner looks for Elastic IPs not associated with a running instance. The EBSScanner finds unattached volumes and snapshots not linked to any AMI. The TransferFamilyScanner checks for SFTP servers with zero configured users (or zero file transfers in the last 30 days via CloudWatch metrics).
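To make that concrete, the core of the Elastic IP check comes down to one API call. A simplified sketch (the real scanner wraps this in the BaseScanner contract):

import boto3

def find_unassociated_eips(session: boto3.Session, region: str) -> list[dict]:
    # An address with no AssociationId isn't attached to anything
    ec2 = session.client("ec2", region_name=region)
    addresses = ec2.describe_addresses()["Addresses"]
    return [addr for addr in addresses if "AssociationId" not in addr]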

A registry module auto-discovers all BaseScanner subclasses and runs them in parallel using ThreadPoolExecutor. This keeps the handler clean:

def lambda_handler(event, context):
    config = load_config()
    session = boto3.Session()

    scanners = ScannerRegistry.discover()
    results = ScannerRegistry.run_all(
        scanners, session, config.regions
    )

    report = ScanResult(
        zombies=results.zombies,
        errors=results.errors,
        regions_scanned=config.regions,
        # ...
    )

    save_to_s3(report, config.bucket, config.prefix)

    if config.sns_topic:
        notify(report.summary(), config.sns_topic)

    return report.summary()

Adding a new scanner means creating a new file in src/scanner/, subclassing BaseScanner, and implementing scan(). The registry picks it up automatically. No handler changes, no configuration updates.
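Under the hood, discovery is plain Python: import every module in the scanner package so the subclasses get defined, then read them off the base class. A simplified sketch (module paths assumed):

import importlib
import pkgutil

import src.scanner as scanner_pkg  # the package being walked; path assumed
from src.scanner.base import BaseScanner  # the ABC shown earlier

def discover() -> list[type[BaseScanner]]:
    # Importing each module is what makes its subclasses visible
    for mod in pkgutil.iter_modules(scanner_pkg.__path__):
        importlib.import_module(f"src.scanner.{mod.name}")
    return list(BaseScanner.__subclasses__())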

What it scans (and how it decides what's "dead")

Seven resource types, each with specific detection criteria:

EC2 Instances — state is stopped. A stopped instance still incurs EBS storage costs for its root and attached volumes, and it's almost always a forgotten resource.

EBS Volumes — not attached to any instance. The volume exists, it has data (maybe), but nothing is using it. Also catches old snapshots not linked to any active AMI.

Elastic IPs — allocated but not associated with a running instance. AWS charges for idle EIPs ($3.60/month each since the February 2024 pricing change), so even a handful adds up.

Transfer Family Servers — SFTP/FTPS servers with zero configured users, or (via CloudWatch) zero FilesIn/FilesOut in the last 30 days. These are expensive ($219/month base cost for a public endpoint) and easy to forget after a one-time migration.

RDS Instances — state is stopped. AWS automatically restarts stopped RDS instances after 7 days (a detail many people miss), so a "stopped" RDS instance is either very recent or has been cycling through stop/auto-restart for months.

Load Balancers — ALBs and NLBs with zero healthy targets. The load balancer exists, it's routing nothing, and it's still costing you ~$16/month in hourly charges plus any LCU usage.

NAT Gateways — present in a subnet, but no route table entries point to them. At $32/month plus data processing charges, an orphaned NAT Gateway is one of the more expensive zombies (the sketch below shows the route-table check).
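That last rule is a good example of a check that needs two API calls: list the gateways, then see which ones any route actually points at. A simplified sketch:

import boto3

def find_orphaned_nat_gateways(session: boto3.Session, region: str) -> set[str]:
    ec2 = session.client("ec2", region_name=region)
    nat_ids = {
        ngw["NatGatewayId"]
        for ngw in ec2.describe_nat_gateways()["NatGateways"]
        if ngw["State"] == "available"
    }
    # Any NAT gateway referenced by a route is still in use
    routed = set()
    for table in ec2.describe_route_tables()["RouteTables"]:
        for route in table.get("Routes", []):
            if "NatGatewayId" in route:
                routed.add(route["NatGatewayId"])
    return nat_ids - routed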

Cost estimation: good enough, not perfect

Every zombie gets an estimated_monthly_cost_usd field. This isn't meant to match your bill to the cent. It's meant to make you go "wait, we're wasting how much?"

The estimation uses a static prices.json file with base prices per resource type and regional multipliers. A stopped t3.medium in us-east-1 gets a different EBS cost estimate than one in ap-southeast-1. It's approximate, but consistently approximate (and that's what matters for prioritization).

I considered pulling real-time pricing from the AWS Price List API, but it's slow, complex, and overkill for a scanner that runs once a day. The static file approach means the Lambda has no external dependencies at scan time (no API calls that could fail or add latency) and the prices are easy to update with a PR.
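The lookup itself is deliberately boring. Something like this, with illustrative numbers and key names (not the repo's actual schema):

def estimate_monthly_cost(prices: dict, resource_type: str, region: str) -> float:
    # Base USD price per resource type, adjusted by a regional multiplier
    base = prices["base"][resource_type]
    multiplier = prices["regional_multipliers"].get(region, 1.0)
    return round(base * multiplier, 2)

prices = {
    "base": {"nat_gateway": 32.40, "elastic_ip": 3.60},
    "regional_multipliers": {"us-east-1": 1.0, "ap-southeast-1": 1.25},
}
estimate_monthly_cost(prices, "nat_gateway", "ap-southeast-1")  # 40.50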

The S3 output format

Reports land in S3 with a date-based key pattern: {prefix}{YYYY-MM-DD}.json. Running the scan twice on the same day overwrites the previous result (last scan wins, by design).
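The key construction is a one-liner, which is what makes the overwrite behavior predictable. A sketch, assuming the report object knows how to serialize itself (to_json() is my placeholder):

import datetime

import boto3

def save_to_s3(report, bucket: str, prefix: str) -> None:
    # One object per UTC day: re-running the scan replaces today's report
    today = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d")
    boto3.client("s3").put_object(
        Bucket=bucket,
        Key=f"{prefix}{today}.json",
        Body=report.to_json(),
        ContentType="application/json",
    )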

The JSON structure is designed for downstream consumption. A future report-generator Lambda (not built yet) can read these files to produce trend charts. The format includes a top-level summary (total zombies, total waste, breakdown by severity and type) plus the full list of zombie resources with all the metadata you'd need to act on them:

{
  "scan_id": "a1b2c3d4-...",
  "account_id": "123456789012",
  "scan_timestamp": "2026-05-12T06:00:12Z",
  "regions_scanned": ["us-east-1", "eu-west-1", "eu-central-1"],
  "total_monthly_waste_usd": 1247.60,
  "total_zombies": 12,
  "summary": {
    "by_severity": { "low": 3, "medium": 5, "high": 3, "critical": 1 },
    "by_type": {
      "ec2_instance": 2,
      "ebs_volume": 4,
      "elastic_ip": 2,
      "transfer_server": 1
    }
  },
  "zombies": [
    {
      "resource_type": "transfer_server",
      "resource_id": "s-0abc1234def567890",
      "region": "eu-west-1",
      "reason": "Transfer Family server with 0 configured users",
      "estimated_monthly_cost_usd": 219.00,
      "severity": "high",
      "age_days": 427,
      "recommended_action": "Review and terminate if unused"
    }
  ],
  "errors": []
}

The errors array is important. If one scanner fails (say, you don't have transfer:List* permissions), the Lambda doesn't crash. It logs the error, adds it to the report, and continues with the remaining scanners. Partial results are better than no results.
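That isolation lives in run_all: each scanner's scan() runs behind its own exception boundary, so a failure becomes a report entry instead of a crash. A simplified sketch of the pattern:

from concurrent.futures import ThreadPoolExecutor

def run_all(scanner_classes, session, regions):
    zombies, errors = [], []
    with ThreadPoolExecutor() as pool:
        # Instantiate each scanner and submit its scan() to the pool
        futures = {
            pool.submit(cls(session, regions).scan): cls.__name__
            for cls in scanner_classes
        }
        for future, name in futures.items():
            try:
                zombies.extend(future.result())
            except Exception as exc:
                # A missing permission on one scanner must not
                # take down the whole report
                errors.append({"scanner": name, "error": str(exc)})
    return zombies, errors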

Testing with moto (62 tests, 90% coverage)

This project has a test suite I'm actually proud of. Every scanner has its own test file, every detection rule has a positive and negative test case, and the full handler flow is integration-tested end to end.

The secret weapon is moto, which mocks AWS services in-process. No LocalStack, no Docker containers for tests, no real AWS calls. You decorate a test with @mock_aws, create fake resources, run the scanner, and assert what it found:

import boto3
from moto import mock_aws

# Import path is illustrative; the scanner lives under src/scanner/
from src.scanner.ebs import EBSScanner


@mock_aws
def test_finds_unattached_ebs_volumes():
    ec2 = boto3.client("ec2", region_name="us-east-1")
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,
        VolumeType="gp3"
    )

    scanner = EBSScanner(
        session=boto3.Session(), regions=["us-east-1"]
    )
    zombies = scanner.scan()

    assert len(zombies) == 1
    assert zombies[0].resource_id == volume["VolumeId"]
    assert zombies[0].estimated_monthly_cost_usd > 0


@mock_aws
def test_ignores_attached_volumes():
    # Create an instance; moto attaches a root volume automatically
    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-12345678",  # moto accepts placeholder AMI IDs
        InstanceType="t3.medium",
        MinCount=1, MaxCount=1
    )

    scanner = EBSScanner(
        session=boto3.Session(), regions=["us-east-1"]
    )
    zombies = scanner.scan()

    # The root volume is attached, so no zombies
    assert len(zombies) == 0

This pattern scales. Each scanner gets a test_finds_* and test_ignores_* pair at minimum, plus edge cases (multi-region, tagged resources, empty regions). The handler integration test mocks S3 and SNS too, verifying the full flow from event to stored report.

Quality gates: mypy --strict with zero errors, ruff for linting and formatting, pytest --cov for the 90% coverage floor. All of this runs in CI before any merge.
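Locally, the gate boils down to four commands (paths are my assumption; CI runs the equivalent):

mypy --strict src/
ruff check src/ tests/
ruff format --check src/ tests/
pytest --cov=src --cov-fail-under=90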

Deployment: SAM with a two-phase pattern

The infrastructure is defined in a SAM template.yaml that provisions:

  • The Lambda function (container image from ECR)
  • An S3 bucket for reports
  • An EventBridge rule for scheduled scanning
  • IAM roles with read-only permissions (EC2, EBS, ELB, RDS, Transfer, CloudWatch, S3 write to the report bucket)
  • An optional SNS topic for notifications

Deployment is sam build && sam deploy --guided. The ECR repository for the container image needs to exist before the first deploy, so there's a bootstrapping step documented in the README.
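In practice the bootstrap is one extra command before the usual flow (the repository name here is a placeholder; check the README for the real one):

# One-time: SAM needs an existing ECR repository to push the image to
aws ecr create-repository --repository-name aws-zombie-hunter

sam build
sam deploy --guided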

One thing I'd flag: the IAM policy is deliberately read-only for all scanned services. The tool finds zombies. It doesn't kill them. That's a conscious decision (you don't want an automated tool terminating resources without human review, even if they look dead).

What I learned (and what I'd change)

ThreadPoolExecutor was the right call. Scanners are I/O-bound (API calls to AWS), so threading gives you near-linear speedup across scanners. The full scan across seven resource types and three regions takes about 45 seconds. Without parallelism, it was closer to three minutes.

Static pricing beats dynamic pricing for this use case. I initially tried to pull real-time prices from the AWS Price List Bulk API. The API is slow, the response format is bizarre (nested JSON with string-encoded numbers), and the latency added 10+ seconds per scan. A prices.json file that gets updated once a quarter is simpler and good enough.

The scanner registry pattern pays for itself. When I wanted to add NAT Gateway scanning after the initial six scanners, it took exactly one new file and one test file. Zero changes to the handler, zero changes to the registry. That's the kind of extensibility that makes a project maintainable.

Error isolation is non-negotiable. Early versions crashed the entire Lambda when one scanner hit a permissions error. Now each scanner runs in its own try/except block, and failures get logged and recorded without affecting the others. In multi-account setups where IAM policies vary, this is essential.

What's next: a report-generator Lambda that reads the JSON files from S3 and produces charts (cost trends over time, breakdown by resource type, worst offenders). Same bucket, separate function, clean separation of concerns. That's a v2 project.

Try it

The repo is at github.com/biscolab/aws-zombie-hunter. Deploy it, run it once, and I'd be surprised if it doesn't find at least a couple of zombies you didn't know about.

The Lambda costs pennies to run. The zombies it finds cost a lot more.


I also wrote a shorter take on this problem on LinkedIn — the "why" rather than the "how". If you're curious about the project or have ideas for additional scanners, drop a comment or open an issue on the repo.
