Day 15 of my Terraform journey was about moving from basic provider usage into more advanced provider patterns.
This was the day where Terraform started to feel much more like a real infrastructure orchestration tool rather than just a way to create isolated cloud resources.
The main focus was learning how to:
- build modules that accept provider configurations from their callers
- use multiple providers in one Terraform project
- manage Docker locally with Terraform
- prepare an AWS EKS + Kubernetes deployment using Terraform
GitHub reference:
GitHub Link
Why This Topic Matters
In real infrastructure, one provider configuration is often not enough.
You may need:
- one AWS region for primary infrastructure
- another AWS region for replicas
- a separate AWS account for production
- Kubernetes to deploy workloads after AWS creates the cluster
- Docker locally for quick testing before cloud deployment
That means the real question is no longer just:
"How do I use a provider?"
It becomes:
"How do I pass the right provider configuration into the right part of my Terraform code?"
That is what Day 15 was really about.
The Core Rule: Modules Should Not Define Their Own Providers
This was the biggest lesson of the day.
A reusable module should not hardcode provider blocks inside itself.
Why?
Because a reusable module should not decide:
- which region to deploy into
- which account to authenticate to
- which alias to use
- which credentials or access path the caller should depend on
Those decisions belong to the root module.
If the child module defines its own providers, it becomes harder to:
- reuse it across regions
- reuse it across accounts
- test it cleanly
- compose it into larger infrastructure setups
So the better pattern is:
- the child module declares which providers it expects
- the root module creates the real provider configurations
- the root module passes them in explicitly
The configuration_aliases Pattern
Inside a reusable module, Terraform needs to know which aliased provider configurations the module expects.
That is where `configuration_aliases` comes in.
Example:
```hcl
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 5.0"
      configuration_aliases = [aws.primary, aws.replica]
    }
  }
}
```
This tells Terraform:
- this module expects two aliased AWS provider configurations
- one named `aws.primary`
- one named `aws.replica`
This is important because the child module is not defining the providers itself.
It is declaring the provider interfaces it expects the caller to supply.
That was the cleanest way for me to understand it:
the module is declaring its provider dependencies, not its provider setup.
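To make that concrete, here is a minimal sketch of what the body of such a module could look like. The resource and variable names are my own placeholders, not required by Terraform; the key detail is that each resource picks one of the declared aliases with the `provider` argument instead of relying on a default provider:

```hcl
# modules/multi-region-app/main.tf (sketch)
# The module never defines its own provider blocks; it only *uses*
# the aliases declared in configuration_aliases.

variable "app_name" {
  type = string
}

resource "aws_s3_bucket" "primary" {
  # Lands in whatever region the caller's aws.primary points to
  provider = aws.primary
  bucket   = "${var.app_name}-primary"
}

resource "aws_s3_bucket" "replica" {
  # Lands in the caller's replica region
  provider = aws.replica
  bucket   = "${var.app_name}-replica"
}
```

The caller stays in full control of regions, accounts, and credentials; the module only says "give me two AWS configurations and I will use them like this."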
Wiring Providers into a Module with the providers Map
Once the child module declares what it expects, the root module passes the actual provider configurations using the `providers` map.
Example root module:
```hcl
provider "aws" {
  alias  = "primary"
  region = "us-east-1"
}

provider "aws" {
  alias  = "replica"
  region = "us-west-2"
}

module "multi_region_app" {
  source   = "../../modules/multi-region-app"
  app_name = "my-app"

  providers = {
    aws.primary = aws.primary
    aws.replica = aws.replica
  }
}
```
This part is the key wiring step.
The root module is saying:
- when the child module asks for `aws.primary`, give it this provider
- when the child module asks for `aws.replica`, give it this other provider
That keeps the responsibilities clean:
- child module = reusable logic
- root module = environment-specific provider wiring
My Multi-Region Module Lab
To make the concept practical, I built a reusable module that deploys S3 buckets in two regions.
Inside the child module:
- one bucket uses `provider = aws.primary`
- the other uses `provider = aws.replica`
In the root module:
- the primary AWS provider points to one region
- the replica AWS provider points to another region
- both are passed into the module with the `providers` map
That gave me a very concrete understanding of how Terraform decides where each resource should go.
The important distinction here is:
- `provider` is used on individual resources and data sources
- `providers` is used when calling a module
That is one of the most important details from Day 15.
Docker as a Quick Multi-Provider Win
Before getting into EKS, I used the Docker provider for a simpler live example.
This was a great bridge between theory and the more expensive AWS/Kubernetes setup.
The Terraform configuration used:
- the Docker provider
- a Docker image resource
- a Docker container resource
Example pattern:
```hcl
terraform {
  required_providers {
    # The Docker provider is community-maintained, so its source
    # must be set explicitly (it is not under hashicorp/).
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

provider "docker" {}

resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  name  = "terraform-nginx"
  image = docker_image.nginx.image_id

  ports {
    internal = 80
    external = 8080
  }
}
```
After applying it, I confirmed nginx was serving locally on:
http://localhost:8080
That was a very useful reminder that Terraform is not just for cloud resources.
It can also manage local platform resources through providers.
In this case:
- Terraform managed Docker resources
- Docker handled the actual container runtime
EKS + Kubernetes: The Real Multi-Provider Pattern
The most advanced part of Day 15 was preparing an EKS + Kubernetes deployment.
This is where Terraform starts using multiple provider types together in a realistic way.
The basic pattern is:
- AWS provider creates the cloud infrastructure
- Kubernetes provider connects to the cluster and deploys workloads
In the EKS lab, Terraform was prepared to create:
- a VPC
- public subnets
- an EKS cluster
- a managed node group
- a Kubernetes namespace
- a Kubernetes Deployment
- a Kubernetes Service of type `LoadBalancer`
That is the real Day 15 milestone:
one Terraform project managing both infrastructure and the application platform on top of it.
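The glue between the two providers is that the Kubernetes provider reads its connection details from the EKS cluster the AWS provider creates. A hedged sketch of that handoff, assuming the cluster resource is named `aws_eks_cluster.this` (the names are illustrative, not from my actual lab code):

```hcl
# Short-lived auth token for the cluster, fetched at plan/apply time
data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.this.name
}

# The Kubernetes provider is configured entirely from EKS outputs,
# so it can only connect once AWS has created the cluster.
provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

resource "kubernetes_namespace" "app" {
  metadata {
    name = "demo"
  }
}
```

Because the Kubernetes provider depends on the cluster's attributes, Terraform naturally sequences the work: AWS infrastructure first, Kubernetes workloads after.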
Why I Stopped at terraform plan for EKS
I prepared the full EKS configuration and validated that Terraform could generate a real plan.
The plan showed that Terraform was ready to create:
- AWS networking
- EKS resources
- node group resources
- Kubernetes workload resources after the cluster became available
However, I intentionally stopped before terraform apply for the EKS part.
Why?
Because EKS is the first part of the challenge that can use credits more noticeably due to:
- EKS control plane charges
- worker nodes
- load balancer resources
- supporting AWS infrastructure
So I made a practical engineering decision:
- complete the Docker lab live
- complete the EKS configuration and plan
- avoid unnecessary credit consumption by not applying the cluster today
I think that still reflects an important real-world skill:
sometimes the right infrastructure decision is not just "can I deploy it?"
but also "should I deploy it right now?"
What I Learned
Day 15 made a few things much clearer for me:
1. Root modules should own provider configuration
Reusable modules should stay flexible and let the caller decide region, account, and authentication strategy.
2. `configuration_aliases` is the missing link
It tells Terraform which aliased providers a module expects.
3. The `providers` map is how modules get wired
This is what connects root-module providers to child-module expectations.
4. Terraform can orchestrate multiple platforms
A single Terraform project can manage:
- AWS
- Docker
- Kubernetes
That makes Terraform much more powerful than a simple cloud provisioning tool.
5. Docker is a great stepping stone before Kubernetes
It gives a quick, low-cost way to understand provider-based container management before moving into a much larger EKS deployment.
My Main Takeaway
The biggest shift for me today was understanding that multi-provider Terraform is really about separation of responsibilities.
- child modules define reusable infrastructure logic
- root modules define the real provider wiring
- each provider handles a specific platform
- Terraform coordinates the whole thing
Once that clicked, the Day 15 material became much easier to understand.
Full Code
GitHub reference:
GitHub Link
Follow My Journey
This is Day 15 of my 30-Day Terraform Challenge.
See you on Day 16 π