DEV Community

Yeison Cruz

Assessing an AWS Legacy Environment

You just got handed the keys to an AWS account that's been running for... who knows how long. The person who built it? Left the company 18 months ago. The documentation? "It's all in Confluence" (it's not).

Now leadership wants to know: "Can we modernize this?" And you're thinking: "I don't even know what this is yet."

Let's fix that.

Why bother with an assessment?

Because you can't fix what you don't understand. And you definitely can't tell your boss "this will take 6 months and cost $200K" when you haven't even looked under the hood.

An assessment answers three simple questions:

  1. What are we running?
  2. How much is it costing us?
  3. What's about to explode?

That's it. No 50-page reports. No fancy architecture diagrams that nobody reads. Just the facts.

Start with the obvious: What's actually running?

Open the AWS Console. Click around. Seriously.

Go to EC2. How many instances do you have? Are they all running? Do they have names, or are they just "i-0a1b2c3d4e5f" with no tags?

Check RDS. Any databases? What versions? (Spoiler: probably outdated)

Look at Lambda. How many functions? Anyone know what they do?

Pro tip: If nothing has tags, you're in for a rough time. Tags are how you know "this server runs the payment API" vs "this is Steve's test box from 2022 that nobody dared to delete."
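If you'd rather not click through hundreds of instances, you can script the tag check. A minimal sketch: the function below takes data shaped like the EC2 `DescribeInstances` response (in practice you'd feed it the output of `boto3.client("ec2").describe_instances()`; the instance IDs and tags here are made up for illustration).

```python
def untagged_instances(reservations):
    """Return IDs of instances that have no Name tag."""
    missing = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if "Name" not in tags:
                missing.append(inst["InstanceId"])
    return missing

# Hypothetical sample shaped like a DescribeInstances response
sample = [{"Instances": [
    {"InstanceId": "i-0aaa", "Tags": [{"Key": "Name", "Value": "payment-api"}]},
    {"InstanceId": "i-0bbb", "Tags": []},
]}]
print(untagged_instances(sample))  # ['i-0bbb']
```

Every ID that comes back is a mystery box you'll need to identify before you can modernize anything.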

Follow the money (Because your CFO will)

Go to Cost Explorer. Look at last month's bill.

What's eating the budget?

  • Is it EC2 instances running 24/7 when they could shut down at night?
  • Data transfer costs because someone's downloading terabytes to their laptop?
  • A NAT Gateway that costs more than the application it supports?
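A quick way to focus the conversation: find the few services that make up most of the bill. The numbers below are hypothetical; in practice they'd come from Cost Explorer's `GetCostAndUsage` API (e.g. `boto3.client("ce").get_cost_and_usage(...)` grouped by `SERVICE`).

```python
def top_spenders(costs_by_service, threshold=0.8):
    """Return the services accounting for `threshold` of total spend."""
    total = sum(costs_by_service.values())
    running, picked = 0.0, []
    for svc, cost in sorted(costs_by_service.items(), key=lambda kv: -kv[1]):
        picked.append((svc, cost))
        running += cost
        if running >= threshold * total:
            break
    return picked

# Hypothetical monthly figures for illustration
monthly = {"EC2": 4200.0, "RDS": 1800.0, "NAT Gateway": 900.0,
           "S3": 300.0, "Lambda": 50.0}
print(top_spenders(monthly))  # [('EC2', 4200.0), ('RDS', 1800.0)]
```

In most legacy accounts, two or three services dominate the bill. That's where your optimization effort should go first.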

Look for the stupid stuff:

  • EBS volumes attached to nothing (you're paying for ghost hard drives)
  • Load balancers with zero traffic (still $16/month each)
  • Elastic IPs sitting idle ($3.60/month adds up when you have 47 of them)

This is low-hanging fruit. You can save money today just by cleaning up the mess.
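You can even put a rough dollar figure on the cleanup. A sketch, with input shapes mirroring the EC2 `DescribeVolumes` and `DescribeAddresses` responses and approximate us-east-1 prices (both are assumptions, not quotes; real data would come from boto3):

```python
GP3_PER_GB_MONTH = 0.08   # approximate gp3 price, assumption
EIP_PER_MONTH = 3.60      # approximate idle Elastic IP cost

def wasted_spend(volumes, addresses):
    """Estimate monthly cost of unattached volumes and idle Elastic IPs."""
    orphans = [v for v in volumes if v.get("State") == "available"]
    idle_ips = [a for a in addresses if "AssociationId" not in a]
    ebs_cost = sum(v["Size"] for v in orphans) * GP3_PER_GB_MONTH
    return round(ebs_cost + len(idle_ips) * EIP_PER_MONTH, 2)

# Hypothetical samples shaped like the EC2 API responses
vols = [{"VolumeId": "vol-1", "State": "available", "Size": 100},
        {"VolumeId": "vol-2", "State": "in-use", "Size": 500}]
ips = [{"PublicIp": "203.0.113.5"},  # no AssociationId -> idle
       {"PublicIp": "203.0.113.6", "AssociationId": "eipassoc-1"}]
print(wasted_spend(vols, ips))  # 11.6
```

Even a rough estimate like this gives you a number to show leadership on day one.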

Security Check (Before someone hacks you)

You don't need to be a security expert. Just answer these questions:

Can anyone on the internet SSH into your servers? (Check security groups for 0.0.0.0/0 on port 22)

Are your S3 buckets public? (They shouldn't be, but you'd be surprised)

Is anyone using the root account? (They'd better not be)

When was the last time someone rotated access keys? (If the answer is "never," we have a problem)

AWS has tools for this (Trusted Advisor, Security Hub), but honestly? Just click through the console and look for red flags. You'll find them.
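The open-SSH check is easy to automate too. A sketch: the rule shape mirrors the EC2 `DescribeSecurityGroups` response (an assumption; in practice you'd feed it `boto3.client("ec2").describe_security_groups()`), and the group IDs are made up.

```python
def open_ssh_groups(groups):
    """Return IDs of security groups exposing port 22 to 0.0.0.0/0."""
    flagged = []
    for g in groups:
        for perm in g.get("IpPermissions", []):
            from_p, to_p = perm.get("FromPort"), perm.get("ToPort")
            # FromPort is absent when the rule allows all traffic
            covers_22 = from_p is None or (from_p <= 22 <= to_p)
            world = any(r.get("CidrIp") == "0.0.0.0/0"
                        for r in perm.get("IpRanges", []))
            if covers_22 and world:
                flagged.append(g["GroupId"])
    return flagged

groups = [
    {"GroupId": "sg-open", "IpPermissions": [
        {"FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
    {"GroupId": "sg-safe", "IpPermissions": [
        {"FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "10.0.0.0/8"}]}]},
]
print(open_ssh_groups(groups))  # ['sg-open']
```

Anything in that flagged list should go to the top of your security findings.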

Map the dependencies (AKA "What breaks if I touch this?")

This is the hard part. You need to figure out what talks to what.

Start simple:

  • Users hit a load balancer
  • Load balancer talks to app servers
  • App servers talk to a database
  • Maybe there's an S3 bucket somewhere
  • Probably some Lambda functions doing... something

How do you figure this out?

  • Look at the code (if you can find it)
  • Check CloudWatch logs for traffic patterns
  • Ask the team (the junior dev who's been here 8 months knows more than you think)

Draw it on a whiteboard. Boxes and arrows. Nothing fancy. Just "if I turn off X, does Y break?"

Spot the Technical Debt

Things that should make you nervous:

  • EC2 instances that have been running since 2019 and nobody knows what they do
  • Databases running MySQL 5.6 (end-of-life was years ago)
  • Manual deployments (someone SSHs in and runs commands)
  • No backups (or backups that nobody's ever tested)
  • Everything in one availability zone (one AWS hiccup = you're down)

Things that are costing you money:

  • Servers sized for peak traffic running 24/7
  • Self-managed databases when RDS would be cheaper and easier
  • No auto-scaling (you're paying for capacity you don't need)

Make a list. Prioritize by "what's going to bite us first."
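One cheap proxy for "what's going to bite us first" is instance age: the oldest unknowns are usually the riskiest. A sketch that ranks instances by age; launch times mirror the EC2 API's `LaunchTime` field (an assumption), and the dates are made up for illustration.

```python
from datetime import datetime, timezone

def oldest_first(instances, now=None):
    """Return (instance_id, age_in_days) pairs, oldest first."""
    now = now or datetime.now(timezone.utc)
    aged = [(inst["InstanceId"], (now - inst["LaunchTime"]).days)
            for inst in instances]
    return sorted(aged, key=lambda pair: -pair[1])

# Hypothetical fleet for illustration
fleet = [
    {"InstanceId": "i-new",
     "LaunchTime": datetime(2025, 1, 1, tzinfo=timezone.utc)},
    {"InstanceId": "i-mystery",
     "LaunchTime": datetime(2019, 6, 1, tzinfo=timezone.utc)},
]
print(oldest_first(fleet, now=datetime(2025, 6, 1, tzinfo=timezone.utc)))
```

A six-year-old instance nobody can explain belongs at the top of the debt list; a six-month-old one can probably wait.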

Know your baseline (Or you can't prove you made it better)

Before you change anything, write down how things perform now.

Capture the basics:

  • How fast do pages load?
  • What's the error rate?
  • How much CPU/memory are we using?
  • How long do database queries take?

You need this. Because in 6 months when you've modernized everything, someone will ask "did this actually help?" and you'll want receipts.
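Turning raw request data into a baseline is a few lines of code. A sketch: in practice the latencies and status codes would come from CloudWatch metrics or your load balancer's access logs; the samples here are made up.

```python
def baseline(latencies_ms, statuses):
    """Compute p95 latency and 5xx error rate from raw samples."""
    ordered = sorted(latencies_ms)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    errors = sum(1 for s in statuses if s >= 500)
    return {"p95_ms": p95, "error_rate": errors / len(statuses)}

# Hypothetical samples for illustration
lat = [120, 90, 150, 400, 110, 95, 130, 105, 2000, 115]
codes = [200, 200, 500, 200, 200, 200, 200, 503, 200, 200]
print(baseline(lat, codes))  # {'p95_ms': 400, 'error_rate': 0.2}
```

Write the numbers down, date them, and save them where the whole team can find them. That's your "before" photo.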

What you should have when you're done

A simple document (Google Doc, Notion, whatever) with:

  • List of what's running (and what it costs)
  • Security issues (ranked by "how screwed are we?")
  • Dependency map (even if it's just boxes and arrows)
  • Technical debt list (prioritized)
  • Performance numbers (your baseline)

That's it. No 100-slide PowerPoint. No executive summary that nobody reads. Just the facts.

Don't Make These Mistakes

Don't trust the documentation. That wiki page was last updated in 2021. Verify everything.

Don't do this alone. Talk to developers. Talk to ops. Talk to the person who gets paged at 3 AM. They know where the bodies are buried.

Don't take forever. This should take 1-2 days, not 3 months. You're assessing, not solving. Yet.

Don't skip the boring parts. Counting EC2 instances isn't fun, but it's necessary. You can't modernize what you can't see.

What's Next?

Now you know what you have. Next comes the hard part: deciding what to do with it.

What do you migrate first? What do you retire? What do you leave alone because it works and nobody wants to touch it?

But that's a problem for next week.

For now, go open that AWS Console and start clicking around. You might be surprised (or horrified) by what you find.

And hey, at least you'll finally know what you're dealing with.
