Mukami

Deploying Multi-Cloud Infrastructure with Terraform Modules

From S3 Buckets to EKS Clusters — All in One Configuration


Day 15 of the 30-Day Terraform Challenge — and today I learned that Terraform isn't just for AWS. It's for everything.

One configuration. Multiple providers. S3 buckets across regions. Docker containers locally. A full Kubernetes cluster on EKS. All from the same tool.

Here's how it all came together.


Part 1: Multi-Provider Modules

The first challenge: creating a module that works across multiple AWS regions.

Modules can't hardcode providers. That would break reusability. Instead, they must accept provider configurations from the caller.

The module (no provider block inside):

terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 5.0"
      configuration_aliases = [aws.primary, aws.replica]
    }
  }
}

resource "aws_s3_bucket" "primary" {
  provider = aws.primary
  bucket   = "${var.app_name}-primary"
}

resource "aws_s3_bucket" "replica" {
  provider = aws.replica
  bucket   = "${var.app_name}-replica"
}

The caller (provides the providers):

provider "aws" {
  alias  = "primary"
  region = "eu-north-1"
}

provider "aws" {
  alias  = "replica"
  region = "eu-west-1"
}

module "multi_region_app" {
  source = "../modules/multi-region-app"

  app_name = "demo-app" # required — the module references var.app_name (example value)

  providers = {
    aws.primary = aws.primary
    aws.replica = aws.replica
  }
}

This pattern is how Terraform scales to global infrastructure.
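For completeness, the module also has to declare the variable it references. A minimal sketch of the module's variables.tf (the description text is my own, not from the original module):

```hcl
# modules/multi-region-app/variables.tf
variable "app_name" {
  description = "Prefix for the S3 bucket names (S3 names must be globally unique)"
  type        = string
}
```

Without this declaration, Terraform rejects the `app_name` argument from the caller with an "unsupported argument" error.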


Part 2: Docker Provider — Local Testing

Before deploying to Kubernetes, I tested locally with Docker:

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

resource "docker_image" "nginx" {
  name = "nginx:latest"
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.image_id
  name  = "terraform-nginx"

  ports {
    internal = 80
    external = 8080
  }
}

One terraform apply later:

$ docker ps
CONTAINER ID   IMAGE          COMMAND                  PORTS                  NAMES
2ea179f7333b   nginx:latest   "/docker-entrypoint.…"   0.0.0.0:8080->80/tcp   terraform-nginx

$ curl http://localhost:8080
<!DOCTYPE html>
<html>
<head><title>Welcome to nginx!</title>...

A container running on my machine, provisioned entirely by Terraform. No Docker commands. No manual setup.
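The mapped port can also be surfaced from Terraform itself instead of digging through docker ps. A sketch, assuming the single ports block above and the kreuzwerker provider's exported ports attribute:

```hcl
output "nginx_url" {
  description = "Local URL of the Terraform-managed nginx container"
  value       = "http://localhost:${docker_container.nginx.ports[0].external}"
}
```

After apply, `terraform output nginx_url` prints the address to curl.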


Part 3: EKS Cluster — The Big One

This was the most complex deployment yet. An entire Kubernetes cluster on AWS EKS.

VPC first (using community module):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-north-1a", "eu-north-1b", "eu-north-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
}
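One caveat worth noting: for Kubernetes to place load balancers, the subnets generally need the well-known EKS discovery tags. A sketch of the extra arguments on the same VPC module call:

```hcl
module "vpc" {
  # ...same arguments as above...

  # Tags EKS uses to discover subnets for internet-facing and internal load balancers
  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}
```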

Then the EKS cluster:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "terraform-challenge-cluster"
  cluster_version = "1.29"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {
      min_size       = 1
      max_size       = 3
      desired_size   = 2
      instance_types = ["t3.small"]
    }
  }
}

The Kubernetes provider (authenticates using AWS token):

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

This exec block runs aws eks get-token each time Terraform needs to talk to the cluster, generating a short-lived authentication token on the fly. No hardcoded credentials.
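An alternative pattern (a sketch, not what I used above) is the aws_eks_cluster_auth data source, which fetches the token once during plan/apply instead of shelling out to the AWS CLI:

```hcl
data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```

The trade-off: that token can expire during a long apply, whereas the exec approach refreshes it on demand.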


Part 4: Deploying to Kubernetes

With the cluster running, I deployed nginx:

resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "nginx-deployment"
    labels = { app = "nginx" }
  }

  spec {
    replicas = 2

    selector {
      match_labels = { app = "nginx" }
    }

    template {
      metadata {
        labels = { app = "nginx" }
      }

      spec {
        container {
          image = "nginx:latest"
          name  = "nginx"
          port  { container_port = 80 }
        }
      }
    }
  }
}

resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-service"
  }

  spec {
    selector = { app = "nginx" }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
}
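Once the service gets its load balancer, Terraform can surface the hostname directly, instead of running kubectl. A sketch, assuming the kubernetes provider's exported service status attribute:

```hcl
output "nginx_lb_hostname" {
  description = "Public hostname of the nginx LoadBalancer"
  value       = kubernetes_service.nginx.status[0].load_balancer[0].ingress[0].hostname
}
```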

Part 5: The Moment It Worked

After 8 minutes of cluster provisioning (felt like an eternity), the nodes appeared:

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-10-0-1-219.eu-north-1.compute.internal     Ready    <none>   21m   v1.29
ip-10-0-2-67.eu-north-1.compute.internal      Ready    <none>   21m   v1.29

Then the nginx pods:

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          30s
nginx-deployment-xxxxxxxxxx-yyyyy   1/1     Running   0          30s

And finally, the LoadBalancer:

$ kubectl get service nginx-service
NAME            TYPE           EXTERNAL-IP
nginx-service   LoadBalancer   a4410db0bc9904a48978a65e7108ee18-2037003514.eu-north-1.elb.amazonaws.com

A publicly accessible nginx server, running on Kubernetes, provisioned entirely by Terraform.


What I Learned

Modules must accept providers. You can't hardcode regions inside a reusable module. Use configuration_aliases and pass providers from the root.

The Docker provider is great for local testing. Before deploying to EKS, I tested the same container image locally. Saved time and money.

EKS takes time. 8-10 minutes for the control plane. Another 2-3 minutes for nodes. Patience is required.

Cluster access is the final hurdle. Even with the cluster running, your IAM user needs explicit permissions via an EKS access entry and an access policy association.
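With the terraform-aws-modules/eks module (v20+), that access can be expressed in configuration rather than clicked together in the console. A sketch, with a placeholder account ID and user name:

```hcl
module "eks" {
  # ...same arguments as above...

  # Grant the IAM identity running Terraform admin access to the cluster
  enable_cluster_creator_admin_permissions = true

  # Or grant another principal explicitly (placeholder ARN)
  access_entries = {
    admin = {
      principal_arn = "arn:aws:iam::111122223333:user/example"

      policy_associations = {
        admin = {
          policy_arn   = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = { type = "cluster" }
        }
      }
    }
  }
}
```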

One tool, many providers. AWS, Docker, Kubernetes — all from the same Terraform configuration.


The Cost Warning

EKS isn't free. A cluster costs ~$0.10/hour plus EC2 nodes (~$0.04/hour each). My 2-hour test cost about $0.50. Always destroy when done.

terraform destroy -auto-approve

The Bottom Line

Today I deployed:

  • S3 buckets in two AWS regions (using provider aliases)
  • A Docker container locally (using Docker provider)
  • A full EKS cluster with 2 nodes (using AWS EKS module)
  • Nginx pods on Kubernetes (using Kubernetes provider)

All from one Terraform configuration.

This is why I love Terraform. One language. One workflow. Every cloud.
