DEV Community

Josh Lee
The 80/20 of AWS (the services that actually matter)

AWS has over 200 services. That number is intimidating. You log into the console, see a wall of icons, and immediately feel like you need a certification just to figure out where to start.

Here's the good news: most companies use the same 10 to 15 services for almost everything. The rest are niche tools for specific problems you probably don't have yet. This is the 80/20 of AWS. The small set of services that handles the vast majority of what you'll actually build.

We're going to walk through each one, explain what it does in plain language, and tell you when you'd reach for it. No deep dives, no architecture diagrams. Just enough to know what's available and when to use it.

A note on the free tier: AWS changed its free tier model in July 2025. If you created your account before July 15, 2025, you get the traditional 12-month free tier with specific service limits. If you signed up after that date, you get up to $200 in credits valid for 6 months. The free tier details below reflect the traditional model, but either way you can try all of these services without spending money up front.

IAM (Identity and Access Management)

IAM controls who can do what in your AWS account. Every person, every application, every service that touches your AWS resources goes through IAM. It's not optional. It's the first thing you configure and the thing that protects everything else.

You create users for people, roles for services, and policies that define exactly what each one is allowed to do. A policy might say "this Lambda function can read from this specific S3 bucket and nothing else." That's the principle of least privilege, and IAM is how you enforce it.
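
To make that concrete, here's a minimal sketch of a least-privilege policy document built in Python. The bucket name is hypothetical; the structure is the standard IAM policy JSON format.

```python
import json

# A least-privilege policy: whoever holds this may read objects from one
# specific bucket ("my-app-uploads" is a hypothetical name) and do nothing
# else. Anything not explicitly allowed is implicitly denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-app-uploads/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attach a policy like this to a role, have your Lambda function assume the role, and the function physically cannot touch anything except those objects.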

When to use it: You're already using it. Every AWS account has IAM. The question is whether you're using it well. If your app is running with admin-level permissions, fix that. Create specific roles with only the permissions each service actually needs.

Free tier: IAM is completely free. You pay for the services it controls, not for the access management itself.

EC2 (Elastic Compute Cloud)

EC2 gives you virtual servers in the cloud. You pick an operating system, choose how much CPU and RAM you want, and you've got a machine running in minutes. It's the most flexible compute option AWS offers because you have full control over the OS, the runtime, the networking, everything.

You'll hear these virtual servers called "instances." They come in dozens of types optimized for different workloads. General purpose instances (the t3 and m7 families) handle most things. Compute-optimized instances (c7) are for CPU-heavy work. Memory-optimized (r7) for big in-memory datasets. The newest generation runs on AWS Graviton4 chips (the 8g instance families like M8g, C8g, R8g), which are up to 30% faster and cheaper than the previous generation.
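
Launching an instance is mostly a matter of choosing those knobs. A hedged sketch of the parameters you'd pass to boto3's `run_instances` (the AMI ID below is a placeholder; real IDs are region-specific):

```python
# Parameters for launching one small general-purpose instance.
# "ami-0123456789abcdef0" is a placeholder, not a real image ID.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "t3.micro",   # free-tier-eligible general purpose
    "MinCount": 1,
    "MaxCount": 1,
    "TagSpecifications": [
        {"ResourceType": "instance",
         "Tags": [{"Key": "Name", "Value": "demo-web"}]},
    ],
}

# With credentials configured, the actual launch would be:
# import boto3
# boto3.client("ec2").run_instances(**launch_params)
print(launch_params["InstanceType"])
```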

When to use it: When you need full control of the server. Hosting a web app, running a background worker, batch processing, machine learning training. If your workload doesn't fit neatly into a serverless function or a container, EC2 is the answer.

Free tier: 750 hours per month of t2.micro or t3.micro instances for 12 months. That's enough to run one small instance 24/7 for free.

S3 (Simple Storage Service)

S3 stores files. Any kind of file, any size, basically unlimited storage. You create "buckets" and put objects in them. An object is a file plus some metadata. That's it.

Nearly every AWS application touches S3 at some point. Static website hosting, image uploads, log storage, data lake, backup destination, ML training data. It's one of the oldest AWS services (launched in 2006) and one of the most reliable. S3 is designed for 99.999999999% durability. That's eleven nines. Your files aren't going anywhere.

S3 has storage classes for different access patterns. Standard is for frequently accessed data. Infrequent Access costs less per GB but charges you for retrieval. Glacier is dirt cheap storage for archives you rarely touch. Intelligent-Tiering automatically moves objects between classes based on how often you access them, so you don't have to think about it.
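
You usually don't move objects between classes by hand; you attach a lifecycle rule to the bucket. A sketch of one (bucket and prefix names are hypothetical; the structure matches boto3's `put_bucket_lifecycle_configuration`):

```python
# A lifecycle rule that demotes aging log files to cheaper storage classes.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # cheaper after a month
                {"Days": 365, "StorageClass": "GLACIER"},     # archive after a year
            ],
        }
    ]
}

# With credentials configured, applying it would look like:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-app-logs", LifecycleConfiguration=lifecycle)
```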

When to use it: Storing anything. Seriously. User uploads, static assets, backups, logs, data exports. If you're generating files or receiving files, they probably belong in S3.

Free tier: 5 GB of Standard storage, 20,000 GET requests, and 2,000 PUT requests per month for 12 months.

RDS (Relational Database Service)

RDS is a managed relational database. You pick your engine (PostgreSQL, MySQL, MariaDB, Oracle, or SQL Server), choose your instance size, and AWS handles the rest. Patching, backups, failover, replication. The stuff that makes running your own database server a full-time job.

Then there's Aurora, which is Amazon's own database engine. It's compatible with PostgreSQL and MySQL but built for the cloud from the ground up. It's faster (Amazon claims up to 5x faster than standard MySQL) and automatically replicates your data across three availability zones. Aurora Serverless scales the database up and down based on demand, so you're not paying for a big instance during off-hours.

When to use it: When your application needs a relational database. If you're building a Rails app, a Django app, a Spring Boot API, anything that talks SQL, use RDS. Pick Aurora if you want the best performance and don't mind being locked into the AWS ecosystem a bit.

Free tier: 750 hours per month of a db.t3.micro or db.t4g.micro instance and 20 GB of storage for 12 months.

DynamoDB

DynamoDB is a fully managed NoSQL database. It stores data as key-value pairs or documents (JSON). There's no server to manage, no patches, no capacity planning in the traditional sense. You create a table, define a primary key, and start reading and writing data.

The big selling point is performance at scale. DynamoDB delivers single-digit millisecond response times regardless of table size. It handles millions of requests per second without you touching any configuration. It also supports Global Tables for automatic cross-region replication if you need your data available worldwide.

The tradeoff is flexibility. You need to design your data model around your access patterns up front. You can't just slap an index on a column later like you would in PostgreSQL. If you get the data model right, DynamoDB is incredibly fast and cheap. If you get it wrong, you'll fight it constantly.
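
Here's what "design around access patterns" looks like in practice. Suppose the pattern is "get all of a user's orders, newest first": you bake that into a composite key up front (table and attribute names here are hypothetical).

```python
# Single-table key design: the partition key groups a user's orders,
# the sort key (an ISO timestamp) orders them chronologically, because
# ISO-8601 strings sort lexicographically in time order.
def order_item(user_id: str, placed_at: str, total_cents: int) -> dict:
    return {
        "PK": f"USER#{user_id}",
        "SK": f"ORDER#{placed_at}",
        "total_cents": total_cents,
    }

item = order_item("42", "2025-01-15T09:30:00Z", 1999)
# A Query on PK = "USER#42" with SK beginning with "ORDER#" now returns
# every order for that user, already sorted by time.
```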

When to use it: High-throughput, low-latency workloads where you know your access patterns ahead of time. Session stores, user profiles, game state, IoT data, shopping carts. If you're building something that needs to scale to millions of users and your data model fits key-value lookups, DynamoDB is the move.

Free tier: 25 GB of storage and enough read/write capacity for about 200 million requests per month. Permanently free, not just 12 months.

Lambda

Lambda lets you run code without managing servers. You write a function, upload it to Lambda, and it runs whenever something triggers it. An HTTP request, a file landing in S3, a message hitting a queue, a scheduled timer. Lambda handles the scaling. If you get one request, it runs one copy. If you get ten thousand simultaneous requests, it runs ten thousand copies.

You pay per execution and per millisecond of compute time. If your function doesn't run, you pay nothing. For workloads that are bursty or event-driven, this is dramatically cheaper than keeping an EC2 instance running 24/7.
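
A back-of-envelope calculation makes the pricing model concrete. The rates below are the published us-east-1 x86 prices at the time of writing (and ignore the free tier); check the pricing page before relying on them.

```python
PER_MILLION_REQUESTS = 0.20   # USD, assumed us-east-1 x86 rate
PER_GB_SECOND = 0.0000166667  # USD, assumed us-east-1 x86 rate

def monthly_cost(requests: int, avg_ms: int, memory_gb: float) -> float:
    # Compute is billed in GB-seconds: duration x memory, summed across calls.
    compute_gb_seconds = requests * (avg_ms / 1000) * memory_gb
    return (requests / 1_000_000) * PER_MILLION_REQUESTS \
        + compute_gb_seconds * PER_GB_SECOND

# 3 million requests/month, 120 ms average, 512 MB of memory:
print(round(monthly_cost(3_000_000, 120, 0.5), 2))  # roughly $3.60
```

A few dollars a month for three million requests is the point: for bursty traffic, that's hard to match with an always-on server.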

Lambda supports Python, Node.js, Java, Go, .NET, Ruby, and custom runtimes. Lambda SnapStart significantly reduces cold-start latency for Java 11+, Python 3.12+, and .NET 8+ functions. Functions can run for up to 15 minutes and use up to 10 GB of memory.

When to use it: Event-driven workloads. Processing an image after upload, handling webhook callbacks, running scheduled tasks, building API backends with API Gateway. If your work happens in short bursts rather than continuous processing, Lambda is probably the cheapest and simplest option.

Free tier: 1 million requests and 400,000 GB-seconds of compute time per month. Permanently free.

API Gateway

API Gateway sits in front of your backend and manages HTTP traffic. You define your API endpoints, connect them to Lambda functions (or EC2, or any HTTP backend), and API Gateway handles authentication, throttling, request validation, and CORS.

It comes in two flavors. HTTP APIs are simpler and cheaper, good for most use cases. REST APIs have more features like request/response transformation, usage plans, and API keys if you need them.

The typical pattern is API Gateway plus Lambda. You get a fully serverless API where you pay nothing when there's no traffic. API Gateway handles the routing, Lambda handles the logic.
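
The Lambda side of that pattern is just a function with a particular shape. With a proxy integration, API Gateway hands you the request as `event` and expects a dict with `statusCode`, `headers`, and `body` back (the handler below is a minimal sketch, invoked locally with a fake event to show the shapes):

```python
import json

def handler(event, context):
    # API Gateway puts query parameters here; the key is None if absent.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event, to show the request/response shapes:
resp = handler({"queryStringParameters": {"name": "dev"}}, None)
```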

When to use it: When you're building an API and want managed infrastructure. Especially powerful paired with Lambda for serverless backends. Also great when you need authentication, rate limiting, or usage tracking without building it yourself.

Free tier: 1 million REST API calls or 1 million HTTP API calls per month for 12 months.

CloudFront

CloudFront is a CDN (Content Delivery Network). It caches your content at edge locations around the world so users get faster response times. Instead of every request traveling to your server in Virginia, CloudFront serves it from a location near the user.

You can put CloudFront in front of S3 buckets, EC2 instances, load balancers, or API Gateway. It handles HTTPS certificates automatically through AWS Certificate Manager. Data transfer from AWS services to CloudFront is free, which is a big deal because data transfer is usually the sneaky expensive part of AWS.

When to use it: Serving static assets (images, CSS, JavaScript), speeding up API responses, or distributing video content. If your users are spread across different regions and you care about load times, put CloudFront in front of your origin.

Free tier: 1 TB of data transfer out and 10 million HTTP/HTTPS requests per month. Permanently free.

Route 53

Route 53 is DNS. It translates domain names (like yourapp.com) into IP addresses that computers understand. You can also register domains directly through Route 53.

Beyond basic DNS, Route 53 supports routing policies. Latency-based routing sends users to the closest region. Weighted routing splits traffic between multiple endpoints (useful for blue-green deploys). Failover routing automatically redirects traffic if a health check fails.

One nice cost trick: if you use Alias records to point to AWS resources (like CloudFront, load balancers, or S3), the DNS queries are free. Regular CNAME records cost $0.40 per million queries. Alias records cost nothing.
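
The saving is small per query but real at volume. For an assumed 50 million queries a month:

```python
# CNAME queries bill at $0.40 per million; alias queries to AWS targets
# (CloudFront, load balancers, S3) bill at $0.
queries_per_month = 50_000_000
cname_cost = queries_per_month / 1_000_000 * 0.40  # dollars per month
alias_cost = 0.0
print(cname_cost)  # $20/month saved just by using alias records
```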

When to use it: When you have a domain name. That's basically everyone. Route 53 ties your domain to your infrastructure and gives you routing control that your registrar probably can't match.

Free tier: No free tier for hosted zones ($0.50/month per zone), but Alias queries to AWS resources are free.

SQS (Simple Queue Service)

SQS is a message queue. You put messages in, something else pulls them out and processes them. The messages wait in the queue until a consumer is ready for them.

This is how you decouple parts of your application. Instead of your web server directly calling a slow process (like sending an email or generating a report), it drops a message on a queue and moves on. A background worker picks up the message and handles it independently. If the worker is busy or down, the messages just pile up in the queue and get processed when it's ready.
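
The contract between producer and worker is just a serialized message. A sketch of that pattern (with boto3, the send would be `sqs.send_message(QueueUrl=..., MessageBody=body)`; here both sides run locally to show the payloads):

```python
import json

# Producer side: serialize a job description and hand it to the queue.
def make_job(kind: str, **payload) -> str:
    return json.dumps({"kind": kind, "payload": payload})

# Worker side: deserialize and dispatch. The email string is a stand-in
# for real work like calling an email provider.
def handle_job(body: str) -> str:
    job = json.loads(body)
    if job["kind"] == "send_email":
        return f"emailing {job['payload']['to']}"
    raise ValueError(f"unknown job kind: {job['kind']}")

result = handle_job(make_job("send_email", to="user@example.com"))
```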

SQS has two types. Standard queues deliver messages at least once and don't guarantee order. FIFO queues guarantee exactly-once processing and strict ordering, but handle fewer messages per second.

When to use it: Decoupling components, handling background jobs, buffering traffic spikes. Any time you want to say "process this later" instead of "process this now," SQS is the tool.

Free tier: 1 million requests per month. Permanently free.

SNS (Simple Notification Service)

SNS is pub/sub messaging. You create a "topic," publish a message to it, and every subscriber gets a copy. Subscribers can be SQS queues, Lambda functions, HTTP endpoints, email addresses, or SMS numbers.

The classic pattern is SNS plus SQS for fan-out. One event (like "a new order was placed") publishes to an SNS topic. Three different SQS queues subscribe: one triggers inventory updates, one sends a confirmation email, one updates analytics. One event, three independent reactions.
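
One practical detail of that pattern: unless you enable raw message delivery, SNS wraps your message in a JSON envelope before it lands in SQS, so each consumer has to unwrap it. A sketch (the topic ARN is hypothetical):

```python
import json

# What an SQS consumer actually receives from an SNS subscription:
# an envelope with metadata, with your payload as a JSON string inside.
envelope = json.dumps({
    "Type": "Notification",
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:orders",  # hypothetical
    "Message": json.dumps({"event": "order_placed", "order_id": "A-1001"}),
})

def unwrap(sqs_body: str) -> dict:
    # Two layers of JSON: the SNS envelope, then your message inside it.
    return json.loads(json.loads(sqs_body)["Message"])

event = unwrap(envelope)
```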

When to use it: When one event needs to trigger multiple things. Notifications, fan-out processing, alerting. If you're using CloudWatch alarms, SNS is usually what sends you the alert.

Free tier: 1 million publishes and 100,000 HTTP/S deliveries per month. Permanently free.

CloudWatch

CloudWatch is monitoring and observability. It collects metrics, logs, and events from your AWS resources and applications. Every AWS service automatically sends basic metrics to CloudWatch. CPU usage on EC2, request count on API Gateway, error rate on Lambda. It's already collecting data. You just need to look at it.

You create alarms that watch a metric and trigger an action when it crosses a threshold. CPU above 80%? Auto-scale. Error rate above 5%? Send an SNS notification to the on-call channel. Lambda duration above 10 seconds? Investigate.
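
Wiring "errors above a threshold, page the team" together looks roughly like this. The parameters below are a hedged sketch for `put_metric_alarm`; the function name and SNS topic ARN are hypothetical.

```python
# Alarm: more than 5 Lambda errors in a 5-minute window pages on-call via SNS.
alarm = {
    "AlarmName": "orders-api-errors",
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": "orders-api"}],
    "Statistic": "Sum",
    "Period": 300,              # evaluate over 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 5,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:on-call"],  # hypothetical
}

# With credentials configured:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```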

CloudWatch Logs stores log output from Lambda functions, ECS containers, EC2 instances, and more. Log Insights lets you query those logs with a SQL-like syntax to find patterns and debug issues.

When to use it: Always. Every production workload should have CloudWatch alarms for the metrics that matter. Set up dashboards for visibility, alarms for things that need attention, and log groups for debugging. It's the first place you look when something breaks.

Free tier: 10 custom metrics, 10 alarms, 1 million API requests, 5 GB of log data ingestion per month.

ECS and Fargate (Elastic Container Service)

ECS runs Docker containers on AWS. You define your container image, how much CPU and memory it needs, and how many copies to run. ECS handles placing those containers on infrastructure and keeping them running.

Fargate is the serverless option for ECS. Instead of managing EC2 instances to run your containers on, Fargate handles the underlying servers. You just define the container and its resources. Fargate provisions the compute, runs the container, and bills you per second for the CPU and memory used.
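
"Just define the container and its resources" means writing a task definition. A hedged sketch of the parameters you'd register (the image URI is hypothetical; Fargate requires `awsvpc` networking and specific CPU/memory combinations, expressed as strings):

```python
# A minimal Fargate task definition: one web container, 0.25 vCPU, 512 MiB.
task_def = {
    "family": "web-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",  # required for Fargate tasks
    "cpu": "256",             # 0.25 vCPU
    "memory": "512",          # 512 MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "portMappings": [{"containerPort": 8080}],
            "essential": True,
        }
    ],
}

# With credentials configured:
# import boto3
# boto3.client("ecs").register_task_definition(**task_def)
```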

There's also EKS (Elastic Kubernetes Service) if your team already knows Kubernetes. ECS is simpler and more tightly integrated with AWS. EKS gives you the full Kubernetes experience with all its power and all its complexity.

When to use it: When your application is containerized. If you have a Dockerfile, ECS with Fargate is the easiest path to running it in production. It's a good middle ground between the full control of EC2 and the constraints of Lambda.

Free tier: No direct free tier for ECS/Fargate, but the EC2 free tier applies if you run ECS on EC2 instances.

Elastic Beanstalk

Elastic Beanstalk is the "just deploy my app" service. You give it your code (Node.js, Python, Java, Ruby, Go, .NET, PHP, or Docker), and it sets up everything: EC2 instances, load balancers, auto-scaling, health monitoring. You don't configure any of it unless you want to.

It's like Heroku, but on AWS. You push code, it deploys. Under the hood, it's creating real AWS resources that you can see and modify if you need to. You're not locked into an abstraction you can't escape from. If you outgrow Beanstalk, all your resources are still there. You just start managing them directly.

When to use it: When you want to get a web app running on AWS fast and you don't want to think about infrastructure. Great for prototypes, side projects, or teams that want AWS's scale without AWS's complexity. You can always graduate to managing EC2 or ECS directly later.

Free tier: Elastic Beanstalk itself is free. You only pay for the underlying resources (EC2, S3, load balancers, etc.), which can fall under their respective free tiers.

How They All Fit Together

Here's a common setup you'll see in the real world. A React frontend sits in an S3 bucket, served globally through CloudFront. Route 53 points the domain to CloudFront. The API is built with API Gateway and Lambda functions, reading and writing to DynamoDB or RDS. User uploads go straight to S3. When something important happens (new order, user signup), an SNS topic notifies multiple SQS queues that trigger different workflows. CloudWatch monitors everything and pages the team through SNS when something breaks. IAM makes sure each piece can only access what it needs.

That entire stack uses nine services. Nine out of 200+. And it handles everything from a hobby project to a production app serving millions of users.

Start with what you need. Most apps begin with just EC2 or Lambda, S3, and a database. Add the rest as your requirements grow. The 80/20 rule holds: a handful of services covers the vast majority of what you'll build.
