
Darian Vance

Posted on • Originally published at wp.me

Solved: I made a visual grid that shows your subscriptions sized by how much they actually cost you

🚀 Executive Summary

TL;DR: Cloud costs often spiral due to ‘cloud sprawl,’ characterized by orphaned resources, unclear ownership, and complex billing. This guide provides three strategies to regain control: quick CLI audits for immediate insights, implementing mandatory resource tagging for comprehensive visibility and dashboarding, and deploying automated ‘Reaper’ scripts for enforcing resource hygiene and cleanup.

🎯 Key Takeaways

  • Cloud sprawl, driven by frictionless resource creation, leads to out-of-control costs through orphaned resources, lack of ownership, and opaque billing.
  • Cloud provider CLIs (e.g., aws ec2 describe-instances) offer a quick, manual way to audit and identify potentially expensive, long-running resources.
  • Implementing mandatory tagging (e.g., ‘Owner’, ‘Project’, ‘Environment’) enforced by IAM policies or Terraform is critical for effective cost allocation, visibility, and dashboarding in cloud cost management tools.
  • Automated ‘Reaper’ scripts (e.g., Lambda functions or cron jobs) can enforce resource hygiene by automatically terminating non-compliant or expired resources, particularly in development environments.

A Senior DevOps Engineer explains the real reasons cloud costs get out of control and provides three practical, field-tested solutions—from quick CLI scripts to automated cleanup—to regain visibility and control.

That ‘Tiny’ Dev Server Just Cost Us a Fortune: A Senior Engineer’s Guide to Cloud Cost Visibility

I still remember the Monday morning meeting. The Director of Engineering’s face was pale. He held up a printout, and even from across the room, I could see the number had a lot of commas. “Can someone,” he said, trying to keep his voice steady, “explain to me why a ‘quick test’ environment cost us more than our entire production database cluster last month?” It turned out a junior engineer, trying to impress everyone, had spun up a massive GPU-accelerated instance for a machine learning experiment on a Friday afternoon. He forgot to turn it off. For ten days, that beast sat there, chewing through our AWS credits at the speed of light. That was the day we stopped *reacting* to cloud bills and started *managing* them.

The Real Problem: Cloud Sprawl is a Silent Killer

Look, the core issue isn’t that the cloud is expensive. It’s that it’s frictionless. Anyone with IAM permissions can spin up a server, a database, a load balancer, or a serverless function in minutes. This is great for agility, but terrible for accountability. The root cause of “bill shock” is almost always a combination of three things:

  • Orphaned Resources: That dev server from a proof-of-concept six months ago that everyone forgot about.
  • Lack of Ownership: When you see a resource named test-instance-01, who do you ask before deleting it? No one knows. So it stays there, forever.
  • Opaque Billing: Cloud provider bills are notoriously complex. Trying to trace a specific line item back to the exact resource that created it can feel like detective work.

It’s not about blame; it’s about a lack of visibility. The Reddit post about visualizing personal subscriptions hit home because we face the exact same problem, just with S3 buckets and EC2 instances instead of Netflix and Spotify. You can’t control what you can’t see.

Three Ways to Fight Back

Over the years, we’ve developed a few strategies to tackle this. Depending on your team’s maturity and how bad the fire is, you can pick the one that fits.

1. The Quick Fix: The Command-Line Audit

Sometimes you just need an answer right now. You don’t have time to set up a fancy dashboard; you just need to find the most expensive running resources. This is where the cloud provider’s CLI is your best friend. It’s a hacky, manual approach, but it’s fast and effective for a quick pulse check.

For example, in AWS, you can run a one-liner to list your running EC2 instances, oldest first. This won’t give you exact costs, but sorting by launch time and eyeballing the instance types is a powerful proxy for “most expensive.”

```bash
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query 'sort_by(Reservations[].Instances[], &LaunchTime)[].{Instance:InstanceId, Type:InstanceType, LaunchTime:LaunchTime, Tags:Tags}' \
  --output table \
  --region us-east-1
```

Pro Tip: This is a snapshot, not a strategy. It’s great for finding that one forgotten p3.16xlarge instance, but it won’t help you track cost trends over time. It’s a bandage, not a cure.

2. The Permanent Fix: Tag Everything, Dashboard Everything

This is the real, grown-up solution. It requires discipline, but it pays for itself a hundred times over. The strategy is simple: No resource gets created without a minimum set of tags.

We enforce this using IAM policies and Terraform plans. A resource that doesn’t have these tags simply can’t be created. Our mandatory tags are:

| Tag Key | Purpose | Example Value |
| --- | --- | --- |
| Owner | The person or team responsible. No more guessing who to ask. | `darian.vance` or `team-platform` |
| Project | The project or service this resource belongs to. | `user-auth-service` |
| Environment | Is this for prod, staging, or just a dev’s experiment? | `production` |
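As a quick compliance check, here’s a rough sketch (my own illustration, not a finished tool) that uses the Resource Groups Tagging API to list resources missing any of those mandatory tags. The `MANDATORY_TAGS` set and helper names are mine:

```python
MANDATORY_TAGS = {"Owner", "Project", "Environment"}

def missing_tags(tags):
    # Given a resource's tag list ([{"Key": ..., "Value": ...}, ...]),
    # return the mandatory keys it is missing, sorted for stable output.
    return sorted(MANDATORY_TAGS - {t["Key"] for t in tags})

def find_untagged_resources(region="us-east-1"):
    import boto3  # local import: only needed when actually scanning AWS

    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    offenders = []
    # get_resources is paginated; walk every page of tagged/untagged resources
    for page in client.get_paginator("get_resources").paginate():
        for resource in page["ResourceTagMappingList"]:
            missing = missing_tags(resource.get("Tags", []))
            if missing:
                offenders.append((resource["ResourceARN"], missing))
    return offenders
```

Run it weekly and post the offender list to your team channel; shame is a surprisingly effective tagging enforcement mechanism.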

Once you have this data, you can finally use your cloud provider’s cost management tools effectively. Go into AWS Cost Explorer (or Azure Cost Management / GCP Billing Reports), group by the ‘Project’ tag, and suddenly your bill makes sense. You can see exactly which services are costing you the most money and build dashboards to track it.
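If you’d rather pull those numbers programmatically than click through the console, the Cost Explorer API exposes the same grouping. A hedged sketch (it assumes ‘Project’ has been activated as a cost allocation tag in the billing console; the helper names are mine):

```python
def project_costs(start, end):
    # Query Cost Explorer for spend between start/end (YYYY-MM-DD strings),
    # grouped by the 'Project' cost allocation tag.
    import boto3  # local import: only needed when actually calling AWS

    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "Project"}],
    )
    return flatten_groups(resp)

def flatten_groups(resp):
    # Collapse the nested API response into {tag_key_value: total_dollars}
    costs = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            key = group["Keys"][0]  # e.g. "Project$user-auth-service"
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            costs[key] = costs.get(key, 0.0) + amount
    return costs
```

Feed the result into whatever dashboard you already have; the hard part was the tags, not the chart.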

3. The ‘Nuclear’ Option: The Reaper Script

Okay, let’s be honest. Sometimes you inherit an environment that’s a complete mess. Hundreds of untagged resources, and nobody knows what they do. In this scenario, you need to send a strong message. Enter the “Reaper.”

The Reaper is a scheduled script (usually a Lambda function or a cron job) that scans your account for non-compliant resources and terminates them automatically. It’s brutal, but incredibly effective at enforcing hygiene.

The logic is simple. For example, a Reaper for your dev environment might do this every night:

```python
# Illustrative only -- run it in dry-run mode in a sandbox account first!
import boto3
from datetime import datetime, timezone

def nightly_reaper(region="us-east-1", dry_run=True):
    ec2 = boto3.resource("ec2", region_name=region)

    # Scan all running instances in the development account
    for instance in ec2.instances.filter(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        tags = {t["Key"]: t["Value"] for t in (instance.tags or [])}
        age_hours = (
            datetime.now(timezone.utc) - instance.launch_time
        ).total_seconds() / 3600

        # Rule 1: no 'Owner' tag means nobody will claim it
        if "Owner" not in tags:
            print(f"Instance {instance.id} has no owner. Terminating.")
            if not dry_run:
                instance.terminate()

        # Rule 2: a 'TTL' tag (in hours) that has expired
        elif "TTL" in tags and age_hours > float(tags["TTL"]):
            print(f"Instance {instance.id} has expired its TTL. Terminating.")
            if not dry_run:
                instance.terminate()
```

Warning: This is a powerful and dangerous tool. You MUST communicate this policy clearly to your teams before deploying it. Test it extensively in a sandbox account. Start by just logging what it *would* terminate before you let it loose. But if you want to clean up a messy dev environment fast, nothing works better.

Ultimately, getting control of your cloud spend isn’t about finding a magic tool. It’s about building a culture of ownership and visibility. Start with the CLI, build towards a tagging strategy, and don’t be afraid to automate cleanup when you have to. Your finance department will thank you.



👉 Read the original article on TechResolve.blog


☕ Support my work

If this article helped you, you can buy me a coffee:

👉 https://buymeacoffee.com/darianvance
