
Subhash Bohra


I Was Overpaying for AWS EC2 (Here’s What I Learned)

Ever felt that uncomfortable pause before opening your AWS bill?

That moment where you know something is off, but you still hope it’s just a rounding error.

I’ve been there.

For years, EC2 felt like the safest option. Familiar. Predictable.

But in 2025, while managing a few internal services, I realized something uncomfortable:

I wasn’t paying for performance.

I was paying for idle compute.


The Moment It Clicked

Above: Real-world AWS cost dashboard highlighting idle EC2 capacity and fixed infrastructure costs.

Our setup looked reasonable on paper:

  • A small API service
  • A scheduled batch job
  • A webhook listener

All running on:

  • EC2
  • Application Load Balancer
  • Auto Scaling Groups

Traffic was unpredictable — short bursts, long quiet periods.

But the bill?

Consistently loud.


The Hidden Cost of “Always On”

When I dug deeper, a pattern emerged:

  • Instances were idle over 70% of the time
  • ALB and NAT Gateway costs never stopped
  • Nights, weekends, and off-hours were pure waste

That’s when it hit me:

EC2 wasn’t expensive.

Idle EC2 was.
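A back-of-the-envelope sketch makes the waste concrete. The hourly rate and idle fraction below are illustrative assumptions (roughly a t3.medium on-demand rate), not figures from our actual bill:

```js
// Rough idle-cost math. Rate and idle fraction are illustrative
// assumptions, not numbers pulled from our real bill.
const HOURS_PER_MONTH = 730;
const hourlyRate = 0.0416;  // approx. t3.medium on-demand (assumed)
const idleFraction = 0.7;   // idle over 70% of the time

const monthlyCost = hourlyRate * HOURS_PER_MONTH;
const idleSpend = monthlyCost * idleFraction;

console.log(`Monthly instance cost: $${monthlyCost.toFixed(2)}`);
console.log(`Spent while idle:      $${idleSpend.toFixed(2)}`);
```

And that is per instance, before the ALB and NAT Gateway charges that accrue around the clock.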


STAR Breakdown (Interview-Ready)

⭐ Situation

Three internal services were running on EC2 with Auto Scaling, despite highly bursty traffic patterns.

⭐ Task

Reduce AWS infrastructure cost without sacrificing reliability or performance.

⭐ Action

  • Identified the least complex service
  • Refactored it into AWS Lambda
  • Exposed it using API Gateway
  • Implemented proper monitoring and secrets management

⭐ Result

  • Significant monthly cost reduction
  • No server patching or scaling rules
  • Cold starts consistently under 200ms
  • Fewer operational alerts and cleaner observability

This one change justified the entire experiment.


The Migration (What I Actually Did)

Above: Transition from EC2 + ALB + ASG to Lambda + API Gateway for bursty workloads.

Step 1: Break the Service Apart

Each Lambda function had a single responsibility.
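In practice that meant one entry point per concern, rather than one app routing everything internally. A minimal sketch, with hypothetical handler names standing in for the three services above:

```js
// One Lambda entry point per responsibility (hypothetical names).
// Each returns the API Gateway proxy response shape.
const apiHandler = async (event) =>
  ({ statusCode: 200, body: JSON.stringify({ service: "api" }) });

const batchHandler = async (event) =>
  ({ statusCode: 200, body: JSON.stringify({ service: "batch" }) });

const webhookHandler = async (event) =>
  ({ statusCode: 202, body: JSON.stringify({ service: "webhook" }) });
```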

Step 2: API Gateway as the Front Door

Clear request/response contracts, proper routing, and throttling.
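Part of keeping those contracts clear was returning one consistent response shape. A small helper sketch, assuming the standard Lambda proxy integration format:

```js
// Helper producing the response shape the Lambda proxy integration
// expects: statusCode, headers, and a stringified body.
const jsonResponse = (statusCode, payload) => ({
  statusCode,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});

// Example: what a throttled caller would receive.
const throttled = jsonResponse(429, { error: "Too Many Requests" });
console.log(throttled.body);
```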

Step 3: Observability from Day One

CloudWatch logs, metrics, and alarms were mandatory.
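For custom metrics, one option is the CloudWatch Embedded Metric Format (EMF), where a structured log line doubles as a metric. This is an illustrative sketch, not necessarily what we shipped; the namespace and service name are made up:

```js
// Build an EMF-shaped log object: CloudWatch parses the `_aws` block
// and turns the named field into a queryable metric.
const emfMetric = (namespace, name, value, unit = "Count") => ({
  _aws: {
    Timestamp: Date.now(),
    CloudWatchMetrics: [{
      Namespace: namespace,
      Dimensions: [["Service"]],
      Metrics: [{ Name: name, Unit: unit }],
    }],
  },
  Service: "webhook-listener", // hypothetical dimension value
  [name]: value,
});

console.log(JSON.stringify(emfMetric("Internal/Services", "RequestCount", 1)));
```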

Step 4: Secrets Stayed Out of Code

Used SSM Parameter Store / Secrets Manager.
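One detail worth the effort: fetch the secret once per container and cache it across warm invocations, instead of calling SSM on every request. A sketch with the fetch injected so the caching logic stands alone; in production the loader would wrap an SDK call such as SSM GetParameter (the stub value here is made up):

```js
// Cache a secret across warm invocations so each Lambda call doesn't
// re-fetch from SSM Parameter Store / Secrets Manager. The loader is
// injected; in production it would wrap an SDK call.
const makeSecretCache = (loadSecret) => {
  let cached; // survives between invocations in a warm container
  return async () => {
    if (cached === undefined) cached = await loadSecret();
    return cached;
  };
};

// Hypothetical usage with a stub loader:
const getDbPassword = makeSecretCache(async () => "hunter2");
```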

Step 5: Cold Start Control

Provisioned Concurrency only where latency truly mattered.
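The cheaper complement to Provisioned Concurrency is keeping per-invocation work out of the handler: anything created at module scope runs once per container, not once per request. A sketch:

```js
// Do expensive setup at module scope so it runs once per container
// (i.e. once per cold start), not on every invocation.
let initCount = 0;
const expensiveInit = () => {
  initCount += 1;
  return { ready: true }; // stand-in for an SDK client or DB connection pool
};

const client = expensiveInit(); // module scope: runs once per cold start

const handler = async () => ({
  // Warm invocations reuse `client`; initCount stays at 1.
  statusCode: 200,
  body: JSON.stringify({ ready: client.ready, inits: initCount }),
});
```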


Sample Lambda Function (Node.js)


```js
export const handler = async (event) => {
  console.log("Incoming request:", JSON.stringify(event));

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: "Hello from Lambda!",
      requestId: event.requestContext?.requestId
    })
  };
};
```
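The handler is easy to smoke-test locally by invoking it with a mock API Gateway proxy event. The handler is inlined again here so the snippet runs standalone:

```js
// Inlined copy of the handler above, invoked with a mock proxy event.
const handler = async (event) => {
  console.log("Incoming request:", JSON.stringify(event));
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: "Hello from Lambda!",
      requestId: event.requestContext?.requestId,
    }),
  };
};

const mockEvent = { requestContext: { requestId: "local-test-1" } };
handler(mockEvent).then((res) => {
  console.log(res.statusCode, JSON.parse(res.body).requestId); // 200 local-test-1
});
```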
