Introduction
On July 15, 2025, AWS simplified its Free Tier into a credit-based model, offering $200 in credits to new accounts and making it easier than ever to spin up and experiment with real infrastructure at minimal cost. This example setup is intended for testing only and is not recommended for running production workloads. It is especially useful for developers who are new to AWS and Terraform and want a quick environment to start learning. In this configuration:
- Networking and access control are permissive: subnets are public and SSH is open to all IP addresses.
- IAM roles use broad permissions rather than least-privilege best practices.
- RDS sizing and high availability are minimal.
- Terraform state is assumed to be stored locally without remote locking.
- Cost and load testing should be performed before handling real traffic.
For teams and individual developers looking to validate system architectures before committing to production budgets, this new Free Tier provides a perfect playground. In this article, we explore how to leverage Terraform to provision a fully functional AWS environment, including VPC networking, EKS (Kubernetes) compute, RDS databases, ElastiCache (Redis) clusters, and logging, capable of handling a modest load of around 5,000 users. You’ll learn why Terraform is the ideal tool for repeatable, versioned infrastructure, see best-practice patterns for each component, and discover how this baseline can evolve to support millions of users down the road.
Using AWS Free Tier with Terraform
By pairing Terraform’s infrastructure‑as‑code with AWS’s $200 credits, you can:
- Prototype rapidly: Launch an entire VPC, cluster, database and cache from a single configuration. With this setup, provisioning typically takes around 20 to 30 minutes; that time is determined by AWS itself and cannot be sped up from your end.
- Stay change‑aware: Preview exactly which resources will be created, modified or destroyed with terraform plan before you apply.
- Iterate safely: Tear down and rebuild environments without manual error, ensuring every change is tracked in Git.
To get started, install the AWS CLI and Terraform, then configure your AWS credentials (for this test setup, an IAM role with administrator access across services is assumed; production accounts should scope permissions down).
Why Terraform?
- Idempotency: Apply the same configuration repeatedly with predictable results.
- Modularity: Break your architecture into reusable modules (VPC, EKS, RDS, ElastiCache) to simplify management.
- Version control: Keep your entire environment in Git; peer‑review changes before they touch actual infrastructure.
- Planning: Use `terraform plan` to catch unintended changes before resources spin up (and burn credits).
To get up and running in minutes, copy the Terraform files below into your working directory, then run the Terraform commands shown at the end of this article.
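One file you will want alongside the components described next is a provider configuration. Below is a minimal sketch; the `providers.tf` filename and the version pins are assumptions rather than details taken from the reference repository, and the variables it reads are declared later in `variables.tf`:

# providers.tf (assumed filename): pin Terraform and the AWS provider,
# and read the region from a variable so the configuration stays portable.
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region

  # default_tags adds Project/Environment tags to every taggable resource,
  # which covers the tagging best practice described below.
  default_tags {
    tags = {
      Project     = var.project_name
      Environment = var.environment
    }
  }
}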
Infrastructure Components & Best Practices
1. VPC & Networking
- File: main.tf (VPC, subnets, Internet/NAT gateways, route tables)
- Best Practices:
  - Tag every resource with `Project` and `Environment`.
  - Split public/private subnets across at least two AZs for resilience.
  - Enable DNS hostnames/support on your VPC for service discovery.
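The article does not reproduce `main.tf` itself, so here is a minimal sketch of its VPC portion, assuming the variables declared in `variables.tf`; the Internet gateway, NAT gateway, and route tables are omitted for brevity:

# main.tf (sketch): VPC with DNS support and subnets across two AZs.
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name        = "${var.project_name}-vpc"
    Project     = var.project_name
    Environment = var.environment
  }
}

# Public subnets spread across the configured AZs (at least two for resilience).
resource "aws_subnet" "public" {
  count                   = length(var.public_subnets)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnets[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.project_name}-public-${count.index + 1}"
  }
}

# Private subnets for RDS, ElastiCache, and EKS worker nodes.
resource "aws_subnet" "private" {
  count             = length(var.private_subnets)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets[count.index]
  availability_zone = var.availability_zones[count.index]

  tags = {
    Name = "${var.project_name}-private-${count.index + 1}"
  }
}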
2. EKS Cluster
- File: eks.tf (using `terraform-aws-modules/eks/aws`)
- Explanation: Leverages a battle‑tested module to provision the control plane and managed node groups.
- Best Practices:
  - Lock node group access to your home IP initially, then tighten security over time.
  - Use auto‑scaling (`min_size`, `max_size`) to accommodate load spikes.
  - Inject IAM roles and an OIDC provider for fine‑grained pod permissions.
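A minimal sketch of `eks.tf` using that module might look like the following; the version pin, Kubernetes version, and node group sizes are illustrative assumptions, so check them against the module version you actually pin:

# eks.tf (sketch): provision the control plane and one managed node group
# through the community module rather than raw aws_eks_* resources.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = var.eks_cluster_name
  cluster_version = "1.30"

  vpc_id     = aws_vpc.main.id
  subnet_ids = aws_subnet.private[*].id

  # Expose the API endpoint publicly for this test setup; tighten later.
  cluster_endpoint_public_access = true

  # IRSA creates the OIDC provider used for fine-grained pod permissions.
  enable_irsa = true

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 1
    }
  }

  tags = {
    Project     = var.project_name
    Environment = var.environment
  }
}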
3. RDS PostgreSQL
- File: rds.tf
- Explanation: A single `db.t3.micro` instance with encrypted storage, automated backups, and enhanced monitoring.
- Best Practices:
  - Enable `performance_insights` for query tuning.
  - Keep `skip_final_snapshot = true` only in non‑production environments; enable snapshot retention in production.
  - Group DB subnets and parameter settings in dedicated resources for clarity.
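As a reference, a minimal sketch of `rds.tf` could look like this; the resource names for the subnet group and security group (`aws_db_subnet_group.main`, `aws_security_group.rds`) are assumptions, and the enhanced-monitoring role is omitted:

# rds.tf (sketch): a single encrypted db.t3.micro with automated backups.
resource "aws_db_instance" "main" {
  identifier = "${var.project_name}-postgres"
  engine     = "postgres"

  instance_class        = var.db_instance_class
  allocated_storage     = var.db_allocated_storage
  max_allocated_storage = var.db_max_allocated_storage
  storage_encrypted     = true

  db_name  = var.db_name
  username = var.db_username
  password = var.db_password

  backup_retention_period      = var.db_backup_retention_period
  performance_insights_enabled = true
  skip_final_snapshot          = true # non-production only; keep a final snapshot in production

  # Assumed to be defined elsewhere in rds.tf (not shown here).
  db_subnet_group_name   = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.rds.id]
}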
4. ElastiCache Redis (optional: only if your API uses Redis)
- File: elasticache.tf
- Explanation: A single‑node Redis cluster (`cache.t3.micro`) with LRU eviction and key‑space notifications.
- Best Practices:
  - Use encryption in transit and at rest.
  - Ship slow‑log and engine logs to CloudWatch for visibility.
  - Define a subnet group spanning private subnets to isolate cache traffic.
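A minimal sketch of `elasticache.tf` is shown below; the log group name is an assumption, and the custom parameter group that would set LRU eviction (maxmemory-policy) and key-space notifications is omitted:

# elasticache.tf (sketch): single-node Redis with encryption and slow-log
# delivery to CloudWatch.
resource "aws_elasticache_replication_group" "main" {
  replication_group_id = "${var.project_name}-redis"
  description          = "Redis cache for ${var.project_name}"

  engine             = "redis"
  node_type          = var.cache_node_type
  num_cache_clusters = 1

  # Assumed to be defined elsewhere in elasticache.tf (not shown here).
  subnet_group_name  = aws_elasticache_subnet_group.main.name
  security_group_ids = [aws_security_group.redis.id]

  at_rest_encryption_enabled = true
  transit_encryption_enabled = true

  log_delivery_configuration {
    destination      = aws_cloudwatch_log_group.redis_slow.name
    destination_type = "cloudwatch-logs"
    log_format       = "json"
    log_type         = "slow-log"
  }
}

resource "aws_cloudwatch_log_group" "redis_slow" {
  name              = "/elasticache/${var.project_name}/slow-log"
  retention_in_days = 7
}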
5. Application & Security Groups
- File: app.tf
- Explanation: Security groups opening only necessary ports (app port 3010, SSH for maintenance).
- Best Practices:
  - Favor specific CIDR blocks over “0.0.0.0/0” where possible.
  - Group common ingress/egress rules into reusable modules.
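A minimal sketch of the security group in `app.tf`, assuming the VPC resource from `main.tf` and the `allowed_ssh_cidr` variable:

# app.tf (sketch): open only the application port and SSH.
resource "aws_security_group" "app" {
  name_prefix = "${var.project_name}-app-"
  description = "Application traffic"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "Application port"
    from_port   = 3010
    to_port     = 3010
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # narrow this to a load balancer or known CIDRs when possible
  }

  ingress {
    description = "SSH for maintenance"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.allowed_ssh_cidr
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}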
6. Logging & Monitoring
- File: elasticache.tf (CloudWatch log group defined alongside the ElastiCache resources)
- Explanation: Captures the ElastiCache slow‑log and engine‑log for performance analysis, with a 7-day retention.
- Best Practices:
  - Align retention settings with compliance requirements
  - Forward logs to a centralized monitoring account as your organization grows
  - Bind a CloudWatch log group for each major service in its respective Terraform file (for example, add log groups in `rds.tf`, `app.tf` and `eks.tf`) to ensure comprehensive observability
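For example, a per-service log group added to `app.tf` could be as small as the following sketch (the log group name is hypothetical):

# A dedicated log group for application logs, mirroring the 7-day retention
# used for the ElastiCache logs.
resource "aws_cloudwatch_log_group" "app" {
  name              = "/app/${var.project_name}"
  retention_in_days = 7
}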
7. Variables Definition
- File: variables.tf
- Explanation: Declares all input variables used by your Terraform configurations. Each variable includes a description, type, default value (where appropriate), and a `sensitive` flag for secrets. This file ensures Terraform knows what inputs to expect and provides IDE support, type checking, and documentation.
- Best Practices:
  - Group variables by component (for example VPC, RDS, ElastiCache, EKS)
  - Provide sensible defaults for common values and override via `.tfvars` files
  - Mark secrets (such as database passwords) as `sensitive = true` to avoid accidental output
  - Add `validation` blocks to enforce constraints (for example minimum password length)
  - Keep descriptions concise and clear so teammates understand each variable’s purpose
  - Avoid hard‑coding environment‑specific settings; instead inject them through `.tfvars`, explained below.
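The full `variables.tf` is not reproduced here, but an excerpt following the patterns above might look like this (the 12-character password constraint is an illustrative assumption):

# variables.tf (excerpt): typed variables with defaults, a sensitive flag,
# and a validation block.
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "availability_zones" {
  description = "Availability zones used by the public and private subnets"
  type        = list(string)
  default     = ["us-east-1a", "us-east-1b"]
}

variable "db_password" {
  description = "Master password for the RDS instance"
  type        = string
  sensitive   = true

  validation {
    condition     = length(var.db_password) >= 12
    error_message = "The database password must be at least 12 characters long."
  }
}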
Terraform Variables (`.tfvars`)
Terraform uses a variables file to inject environment‑specific values without changing your `.tf` files. Create a file named `terraform.tfvars` alongside your configuration and include the following settings:
aws_region = "us-east-1"
project_name = "project-name"
environment = "development"
vpc_cidr = "10.0.0.0/16"
availability_zones = ["us-east-1a", "us-east-1b"]
public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
allowed_ssh_cidr = ["0.0.0.0/0"]
db_instance_class = "db.t3.micro"
db_allocated_storage = 20
db_max_allocated_storage = 100
db_name = "db-name"
db_username = "db-username"
db_password = "your-secure-password-here"
db_backup_retention_period = 7
cache_node_type = "cache.t3.micro"
eks_cluster_name = "name-example-eks"
NOTE: Adjust these values as needed for your environment. The name of the file should be exactly `terraform.tfvars`.
Using a `.tfvars` file lets you:
- Keep secret or environment‑specific values out of your versioned `.tf` files.
- Swap between development, staging and production settings by passing a different `.tfvars` file.
- Simplify automation by referencing a single variables file.
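Note that Terraform only auto-loads `terraform.tfvars` (and `*.auto.tfvars`); any other file, such as a hypothetical `staging.tfvars`, has to be passed explicitly:

terraform plan -out=tfplan -var-file="staging.tfvars"
terraform apply tfplan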
Ignoring Terraform Artifacts
You don’t want to add sensitive or unnecessary Terraform files to your GitHub repository. Add the following entries to your `.gitignore`, adjusting the paths to match your Terraform folder (for example `/src/infrastructure/cloud/terraform`):
# Terraform directories and state
/src/infrastructure/cloud/terraform/.terraform
/src/infrastructure/cloud/terraform/.terraform.lock.hcl
/src/infrastructure/cloud/terraform/.terraform.tfstate.lock.info
# Terraform variable and state files
/src/infrastructure/cloud/terraform/terraform.tfvars
/src/infrastructure/cloud/terraform/terraform.tfstate
/src/infrastructure/cloud/terraform/terraform.tfstate*
# Terraform plan files
/src/infrastructure/cloud/terraform/tfplan
Applying and Managing Terraform
To provision this environment, run the following from inside your Terraform folder: `terraform init`, then `terraform plan -out=tfplan`, and finally `terraform apply tfplan`. Terraform will then begin provisioning the AWS resources defined in your configuration.
Other useful Terraform commands:
- `terraform fmt`: keeps configuration style consistent
- `terraform validate`: checks for syntax errors and missing variables
- `terraform destroy`: destroys all resources managed by your Terraform state
- `terraform taint <resource_address>`: marks a resource as tainted so it will be replaced on the next apply. Example: `terraform taint aws_db_instance.main`
- `terraform apply -replace="<resource_address>"`: forces replacement of only the specified resource. Example: `terraform apply -replace="aws_elasticache_replication_group.main"`
- `terraform destroy -target=<resource_address>`: destroys only the specified resource without affecting others. Example: `terraform destroy -target=aws_security_group.app`
- Full list here.
Conclusion & Scaling to Millions
This Terraform‑driven setup, running on AWS’s new Free Tier credits, gives you a turnkey environment capable of supporting roughly 5,000 moderate daily users. To scale toward millions of users, you can:
- EKS: Introduce horizontal pod autoscaling, cluster autoscaler, and multiple node groups (including spot instances).
- RDS: Migrate from a single `db.t3.micro` to Amazon Aurora with read replicas for read‑heavy workloads.
- ElastiCache: Enable cluster‑mode with sharding and multiple replicas for high availability and throughput.
- Global Reach: Add multi‑region deployments with Route 53 latency routing and database cross‑region replicas.
- CI/CD & Observability: Integrate Terraform Cloud or GitHub Actions for automated drift detection, and plug in Prometheus/Grafana for real‑time metrics.
By treating your AWS environment as code, you maintain the agility to evolve your architecture seamlessly, and with AWS Free Tier credits still available for your initial experiments, there’s no better time to start.
Quick note: If you don't want to burn your credits, you can run `terraform destroy` after testing this setup.
You can view my entire Terraform folder for reference here.