Lab Information
Task 1
The DevOps team has been tasked with creating a secure DynamoDB table and enforcing fine-grained access control using IAM. This setup will allow secure and restricted access to the table from trusted AWS services only.
As a member of the Nautilus DevOps Team, your task is to perform the following using Terraform:
Create a DynamoDB Table: Create a table named devops-table-t1q4 with minimal configuration.
Create an IAM Role: Create an IAM role named devops-role-t1q4 that will be allowed to access the table.
Create an IAM Policy: Create a policy named devops-readonly-policy-t1q4 that should grant read-only access (GetItem, Scan, Query) to the specific DynamoDB table and attach it to the role.
Create the main.tf file (do not create a separate .tf file) to provision the table, role, and policy.
Create the variables.tf file with the following variables:
KKE_TABLE_NAME: name of the DynamoDB table
KKE_ROLE_NAME: name of the IAM role
KKE_POLICY_NAME: name of the IAM policy
Create the outputs.tf file with the following outputs:
kke_dynamodb_table: name of the DynamoDB table
kke_iam_role_name: name of the IAM role
kke_iam_policy_name: name of the IAM policy
Define the actual values for these variables in the terraform.tfvars file.
Ensure that the IAM policy allows only read access and restricts it to the specific DynamoDB table created.
Task 2
The Nautilus DevOps team is expanding their AWS infrastructure and requires the setup of a CloudWatch alarm and SNS integration for monitoring EC2 instances. The team needs to configure an SNS topic for CloudWatch to publish notifications when an EC2 instance's CPU utilization exceeds 80%. The alarm should trigger whenever the CPU utilization is greater than 80% and notify the SNS topic to alert the team.
Create an SNS topic named devops-sns-topic-t1q2.
Create a CloudWatch alarm named devops-cpu-alarm-t1q2 to monitor EC2 CPU utilization with the following conditions:
Metric: CPUUtilization
Threshold: 80%
Actions enabled
Alarm actions should be triggered to the SNS topic.
Ensure that the SNS topic receives notifications from the CloudWatch alarm when it is triggered.
Update the main.tf file (do not create a different .tf file) to create the SNS topic and CloudWatch alarm.
Create an outputs.tf file to output the following values:
KKE_sns_topic_name for the SNS topic name.
KKE_cloudwatch_alarm_name for the CloudWatch alarm name.
Task 3
The Nautilus DevOps team is expanding their AWS infrastructure and requires the setup of a private Virtual Private Cloud (VPC) along with a subnet. This VPC and subnet configuration will ensure that resources deployed within them remain isolated from external networks and can only communicate within the VPC. Additionally, the team needs to provision an EC2 instance under the newly created private VPC. This instance should be accessible only from within the VPC, allowing for secure communication and resource management within the AWS environment.
Create a VPC named devops-priv-vpc-t2q3 with the CIDR block 10.0.0.0/16.
Create a subnet named devops-priv-subnet-t2q3 inside the VPC with the CIDR block 10.0.1.0/24; the auto-assign public IP option must not be enabled.
Create an EC2 instance named devops-priv-ec2-t2q3 inside the subnet; the instance type must be t2.micro.
Ensure the security group of the EC2 instance allows access only from within the VPC's CIDR block.
Create the main.tf file (do not create a separate .tf file) to provision the VPC, subnet and EC2 instance.
Use variables.tf file with the following variable names:
KKE_VPC_CIDR for the VPC CIDR block.
KKE_SUBNET_CIDR for the subnet CIDR block.
Use the outputs.tf file with the following output names:
KKE_vpc_name for the name of the VPC.
KKE_subnet_name for the name of the subnet.
KKE_ec2_private for the name of the EC2 instance.
Task 4
To test resilience and recreation behavior in Terraform, the DevOps team needs to demonstrate the use of the -replace option to forcefully recreate an EC2 instance without changing its configuration. Please complete the following tasks:
Use the Terraform CLI -replace option to destroy and recreate the EC2 instance devops-ec2-t2q1, even though the configuration remains unchanged.
Ensure that the instance is recreated successfully.
Notes:
The new instance created using the -replace option should have a different instance ID than the previously provisioned instance.
The Terraform working directory is /home/bob/terraform/t2q1.
Right-click under the EXPLORER section in VS Code and select Open in Integrated Terminal to launch the terminal.
Before submitting the task, ensure that terraform plan returns No changes. Your infrastructure matches the configuration.
Task 5
The Nautilus DevOps team wants to provision multiple EC2 instances in AWS using Terraform. Each instance should follow a consistent naming convention and be deployed using a modular and scalable setup.
Use Terraform to:
Create 3 EC2 instances using the count parameter.
Name each EC2 instance with the prefix devops-instance-t3q4 (e.g., devops-instance-t3q4-1).
Instances should be t2.micro.
The key name should be devops-key-t3q4.
Create main.tf file (do not create a separate .tf file) to provision these instances.
Use variables.tf file with the following:
KKE_INSTANCE_COUNT: number of instances.
KKE_INSTANCE_TYPE: type of the instance.
KKE_KEY_NAME: name of key used.
KKE_INSTANCE_PREFIX: prefix for the instance names.
Use the locals.tf file to define a local variable named AMI_ID that retrieves the latest Amazon Linux 2 AMI using a data source.
Use terraform.tfvars to assign values to the variables.
Use outputs.tf file to output the following:
kke_instance_names: names of the instances created.
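The setup above can be sketched in HCL as follows. This is a minimal, illustrative sketch: the resource labels (devops, amazon_linux_2) and the Amazon Linux 2 AMI name filter pattern are assumptions, not lab-mandated names.

```hcl
# variables.tf (values supplied via terraform.tfvars)
variable "KKE_INSTANCE_COUNT" {
  type = number
}

variable "KKE_INSTANCE_TYPE" {
  type = string
}

variable "KKE_KEY_NAME" {
  type = string
}

variable "KKE_INSTANCE_PREFIX" {
  type = string
}

# locals.tf — latest Amazon Linux 2 AMI via a data source
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

locals {
  AMI_ID = data.aws_ami.amazon_linux_2.id
}

# main.tf — identical instances stamped out with count
resource "aws_instance" "devops" {
  count         = var.KKE_INSTANCE_COUNT
  ami           = local.AMI_ID
  instance_type = var.KKE_INSTANCE_TYPE
  key_name      = var.KKE_KEY_NAME

  tags = {
    # count.index is zero-based, so add 1 for devops-instance-t3q4-1, -2, -3
    Name = "${var.KKE_INSTANCE_PREFIX}-${count.index + 1}"
  }
}

# outputs.tf — collect every Name tag into one list
output "kke_instance_names" {
  value = [for i in aws_instance.devops : i.tags["Name"]]
}
```

With KKE_INSTANCE_COUNT = 3 in terraform.tfvars, the for expression in outputs.tf yields the three generated names as a list.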
Task 6
The Nautilus DevOps team is experimenting with Terraform provisioners. Your task is to create an IAM user and use a local-exec provisioner to log a confirmation message.
Create an IAM user named iamuser_ravi_t3q2.
Use a local-exec provisioner with the IAM user resource to log the message KKE iamuser_ravi_t3q2 has been created successfully! to a file called KKE_user_created.log under /home/bob/terraform/t3q2.
Create the main.tf file (do not create a separate .tf file) to provision an IAM user.
Use variables.tf file with the following:
KKE_USER_NAME: name of the IAM user.
Use terraform.tfvars to input the name of the IAM user.
Use outputs.tf file with the following:
kke_iam_user_name: name of the IAM user.
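A minimal sketch of the provisioner wiring, assuming the local-exec provisioner is attached directly to the IAM user resource (the resource label ravi is illustrative):

```hcl
# variables.tf
variable "KKE_USER_NAME" {
  type = string
}

# terraform.tfvars would contain: KKE_USER_NAME = "iamuser_ravi_t3q2"

# main.tf — IAM user with a local-exec provisioner
resource "aws_iam_user" "ravi" {
  name = var.KKE_USER_NAME

  # Runs on the machine executing terraform, after the user is created;
  # self.name refers to this resource's own name attribute
  provisioner "local-exec" {
    command = "echo 'KKE ${self.name} has been created successfully!' >> /home/bob/terraform/t3q2/KKE_user_created.log"
  }
}

# outputs.tf
output "kke_iam_user_name" {
  value = aws_iam_user.ravi.name
}
```

Note that provisioners run only on resource creation by default, so the log line is written once, during the initial apply.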
Task 7
As part of a data migration project, the team lead has tasked the team with migrating data from an existing S3 bucket to a new S3 bucket. The existing bucket contains a substantial amount of data that must be accurately transferred to the new bucket. The team is responsible for creating the new S3 bucket using Terraform and ensuring that all data from the existing bucket is copied or synced to the new bucket completely and accurately. It is imperative to perform thorough verification steps to confirm that all data has been successfully transferred to the new bucket without any loss or corruption.
As a member of the Nautilus DevOps Team, your task is to perform the following using Terraform:
Create a New Private S3 Bucket: Name the bucket devops-sync-5915-t4q4 and store this bucket name in a variable named KKE_BUCKET.
Data Migration: Migrate all data from the existing devops-s3-16124-t4q4 bucket to the new devops-sync-5915-t4q4 bucket.
Ensure Data Consistency: Ensure that both buckets contain the same data after migration.
Update the main.tf file (do not create a separate .tf file) to provision a new private S3 bucket and migrate the data.
Use the variables.tf file with the following variable:
KKE_BUCKET: The name for the new bucket created.
Use the outputs.tf file with the following outputs:
new_kke_bucket_name: The name of the new bucket created.
new_kke_bucket_acl: The ACL of the new bucket created.
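One common pattern is to create the bucket in Terraform and run aws s3 sync through a local-exec provisioner. The sketch below assumes the AWS CLI is available on the machine running Terraform, and uses a null_resource plus a separate aws_s3_bucket_acl resource (the approach and all resource labels are illustrative, not lab-mandated):

```hcl
# variables.tf
variable "KKE_BUCKET" {
  type = string
}

# terraform.tfvars would contain: KKE_BUCKET = "devops-sync-5915-t4q4"

# main.tf — new private bucket
resource "aws_s3_bucket" "new_bucket" {
  bucket = var.KKE_BUCKET
}

resource "aws_s3_bucket_acl" "new_bucket_acl" {
  bucket = aws_s3_bucket.new_bucket.id
  acl    = "private"
}

# Copy all objects from the old bucket once the new one exists
resource "null_resource" "migrate" {
  depends_on = [aws_s3_bucket.new_bucket]

  provisioner "local-exec" {
    command = "aws s3 sync s3://devops-s3-16124-t4q4 s3://${aws_s3_bucket.new_bucket.bucket}"
  }
}

# outputs.tf
output "new_kke_bucket_name" {
  value = aws_s3_bucket.new_bucket.bucket
}

output "new_kke_bucket_acl" {
  value = aws_s3_bucket_acl.new_bucket_acl.acl
}
```

To verify consistency afterwards, listing both buckets recursively with the AWS CLI and comparing the results is a simple check that object counts and keys match.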
Task 8
To ensure secure storage that is protected against accidental deletion, the DevOps team must configure an S3 bucket using Terraform with strict lifecycle protections. The goal is to create a bucket that is dynamically named and cannot be destroyed by mistake. Please complete the following tasks:
Create an S3 bucket named devops-s3-28270-t4q2.
Apply the prevent_destroy lifecycle rule to protect the bucket.
Create the main.tf file (do not create a separate .tf file) to provision an S3 bucket with the prevent_destroy lifecycle rule.
Use the variables.tf file with the following:
KKE_BUCKET_NAME: name of the bucket.
Use the terraform.tfvars file to input the name of the bucket.
Use the outputs.tf file with the following:
s3_bucket_name: name of the created bucket.
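The prevent_destroy rule is a plain lifecycle block on the resource; a minimal sketch (the resource label protected is illustrative):

```hcl
# variables.tf
variable "KKE_BUCKET_NAME" {
  type = string
}

# terraform.tfvars would contain: KKE_BUCKET_NAME = "devops-s3-28270-t4q2"

# main.tf — bucket protected from accidental destruction
resource "aws_s3_bucket" "protected" {
  bucket = var.KKE_BUCKET_NAME

  lifecycle {
    # Any plan that would destroy this resource fails with an error
    prevent_destroy = true
  }
}

# outputs.tf
output "s3_bucket_name" {
  value = aws_s3_bucket.protected.bucket
}
```

With this in place, terraform destroy (or any change forcing replacement) aborts with an error instead of deleting the bucket.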
Lab Solutions
Task 1
1️⃣ variables.tf
variable "KKE_TABLE_NAME" {
  type = string
}

variable "KKE_ROLE_NAME" {
  type = string
}

variable "KKE_POLICY_NAME" {
  type = string
}
2️⃣ terraform.tfvars
KKE_TABLE_NAME  = "devops-table-t1q4"
KKE_ROLE_NAME   = "devops-role-t1q4"
KKE_POLICY_NAME = "devops-readonly-policy-t1q4"
3️⃣ main.tf
# DynamoDB Table
resource "aws_dynamodb_table" "devops_table" {
  name         = var.KKE_TABLE_NAME
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

# IAM Role (trusted by AWS services)
resource "aws_iam_role" "devops_role" {
  name = var.KKE_ROLE_NAME

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
      Action = "sts:AssumeRole"
    }]
  })
}

# IAM Policy (read-only access to the DynamoDB table)
resource "aws_iam_policy" "devops_policy" {
  name = var.KKE_POLICY_NAME

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query"
      ]
      Resource = aws_dynamodb_table.devops_table.arn
    }]
  })
}

# Attach Policy to Role
resource "aws_iam_role_policy_attachment" "attach_policy" {
  role       = aws_iam_role.devops_role.name
  policy_arn = aws_iam_policy.devops_policy.arn
}
4️⃣ outputs.tf
output "kke_dynamodb_table" {
  value = aws_dynamodb_table.devops_table.name
}

output "kke_iam_role_name" {
  value = aws_iam_role.devops_role.name
}

output "kke_iam_policy_name" {
  value = aws_iam_policy.devops_policy.name
}
5️⃣ Terraform Commands (Run in Order)
terraform init
terraform validate
terraform apply
Type:
yes
✅ Expected Output
bob@iac-server ~/terraform/t1q4 via default ➜ terraform apply
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_dynamodb_table.devops_table will be created
+ resource "aws_dynamodb_table" "devops_table" {
+ arn = (known after apply)
+ billing_mode = "PAY_PER_REQUEST"
+ hash_key = "id"
+ id = (known after apply)
+ name = "devops-table-t1q4"
+ read_capacity = (known after apply)
+ stream_arn = (known after apply)
+ stream_label = (known after apply)
+ stream_view_type = (known after apply)
+ tags_all = (known after apply)
+ write_capacity = (known after apply)
+ attribute {
+ name = "id"
+ type = "S"
}
+ point_in_time_recovery (known after apply)
+ server_side_encryption (known after apply)
+ ttl (known after apply)
}
# aws_iam_policy.devops_policy will be created
+ resource "aws_iam_policy" "devops_policy" {
+ arn = (known after apply)
+ attachment_count = (known after apply)
+ id = (known after apply)
+ name = "devops-readonly-policy-t1q4"
+ name_prefix = (known after apply)
+ path = "/"
+ policy = (known after apply)
+ policy_id = (known after apply)
+ tags_all = (known after apply)
}
# aws_iam_role.devops_role will be created
+ resource "aws_iam_role" "devops_role" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "ec2.amazonaws.com"
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "devops-role-t1q4"
+ name_prefix = (known after apply)
+ path = "/"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
+ inline_policy (known after apply)
}
# aws_iam_role_policy_attachment.attach_policy will be created
+ resource "aws_iam_role_policy_attachment" "attach_policy" {
+ id = (known after apply)
+ policy_arn = (known after apply)
+ role = "devops-role-t1q4"
}
Plan: 4 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ kke_dynamodb_table = "devops-table-t1q4"
+ kke_iam_policy_name = "devops-readonly-policy-t1q4"
+ kke_iam_role_name = "devops-role-t1q4"
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_iam_role.devops_role: Creating...
aws_dynamodb_table.devops_table: Creating...
aws_iam_role.devops_role: Creation complete after 0s [id=devops-role-t1q4]
aws_dynamodb_table.devops_table: Creation complete after 3s [id=devops-table-t1q4]
aws_iam_policy.devops_policy: Creating...
aws_iam_policy.devops_policy: Creation complete after 0s [id=arn:aws:iam::000000000000:policy/devops-readonly-policy-t1q4]
aws_iam_role_policy_attachment.attach_policy: Creating...
aws_iam_role_policy_attachment.attach_policy: Creation complete after 0s [id=devops-role-t1q4-20260201122457259600000001]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
kke_dynamodb_table = "devops-table-t1q4"
kke_iam_policy_name = "devops-readonly-policy-t1q4"
kke_iam_role_name = "devops-role-t1q4"
🧠 Step-by-Step Explanation (Simple & Clear)
🔹 What problem is this lab solving?
The team wants:
A DynamoDB table
Secure, read-only access
Access controlled using IAM
No full permissions, no wildcards
This is least-privilege access control.
🔹 Why use variables.tf here?
This lab explicitly asks for variables because:
Names must be configurable
Grader checks variable usage
Values must be injected via terraform.tfvars
Think:
variables = lab-controlled inputs
🔹 DynamoDB Table
aws_dynamodb_table
Minimal configuration
PAY_PER_REQUEST billing (no capacity tuning)
Simple hash key (id)
This satisfies "minimal configuration".
🔹 IAM Role
aws_iam_role
The trust policy says:
"EC2 services are allowed to assume this role"
Without this:
The role would exist
But no AWS service could use it
🔹 IAM Policy (Read-only)
Allowed actions only:
GetItem
Scan
Query
✅ Read access
❌ No write
❌ No delete
❌ No wildcard actions
Resource is restricted to:
aws_dynamodb_table.devops_table.arn
This is fine-grained security 🔐
🔹 Policy Attachment
aws_iam_role_policy_attachment
This step connects permissions to identity.
Without it:
Role exists
Policy exists
But access does not work
🔹 What happens during terraform apply?
1️⃣ DynamoDB table is created
2️⃣ IAM role is created
3️⃣ IAM policy is created
4️⃣ Policy is attached to the role
5️⃣ AWS enforces read-only access
🧠 Easy Memory Model
DynamoDB = 📦 data store
IAM role = 👤 identity
IAM policy = 📜 permissions
Attachment = 🔗 connection
Variables = 🎛 controlled inputs
🚨 Common Mistakes (You avoided them)
❌ Using dynamodb:*
❌ Using Resource = "*"
❌ Missing policy attachment
❌ Hardcoding names instead of variables
❌ Creating extra services
Task 2
1️⃣ locals.tf
locals {
  KKE_SNS_TOPIC_NAME        = "devops-sns-topic-t1q2"
  KKE_CLOUDWATCH_ALARM_NAME = "devops-cpu-alarm-t1q2"
}
2️⃣ main.tf
# SNS Topic
resource "aws_sns_topic" "devops_topic" {
  name = local.KKE_SNS_TOPIC_NAME
}

# CloudWatch Alarm for EC2 CPU Utilization
resource "aws_cloudwatch_metric_alarm" "devops_cpu_alarm" {
  alarm_name          = local.KKE_CLOUDWATCH_ALARM_NAME
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 300
  statistic           = "Average"
  threshold           = 80
  actions_enabled     = true

  alarm_actions = [
    aws_sns_topic.devops_topic.arn
  ]
}
3️⃣ outputs.tf
output "KKE_sns_topic_name" {
  value = aws_sns_topic.devops_topic.name
}

output "KKE_cloudwatch_alarm_name" {
  value = aws_cloudwatch_metric_alarm.devops_cpu_alarm.alarm_name
}
4️⃣ Terraform Commands (Run in Order)
terraform init
terraform validate
terraform apply
Type:
yes
✅ Expected Output
bob@iac-server ~/terraform/t1q2 via default ➜ terraform apply
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_cloudwatch_metric_alarm.devops_cpu_alarm will be created
+ resource "aws_cloudwatch_metric_alarm" "devops_cpu_alarm" {
+ actions_enabled = true
+ alarm_actions = (known after apply)
+ alarm_name = "devops-cpu-alarm-t1q2"
+ arn = (known after apply)
+ comparison_operator = "GreaterThanThreshold"
+ evaluate_low_sample_count_percentiles = (known after apply)
+ evaluation_periods = 1
+ id = (known after apply)
+ metric_name = "CPUUtilization"
+ namespace = "AWS/EC2"
+ period = 300
+ statistic = "Average"
+ tags_all = (known after apply)
+ threshold = 80
+ treat_missing_data = "missing"
}
# aws_sns_topic.devops_topic will be created
+ resource "aws_sns_topic" "devops_topic" {
+ arn = (known after apply)
+ beginning_archive_time = (known after apply)
+ content_based_deduplication = false
+ fifo_topic = false
+ id = (known after apply)
+ name = "devops-sns-topic-t1q2"
+ name_prefix = (known after apply)
+ owner = (known after apply)
+ policy = (known after apply)
+ signature_version = (known after apply)
+ tags_all = (known after apply)
+ tracing_config = (known after apply)
}
Plan: 2 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ KKE_cloudwatch_alarm_name = "devops-cpu-alarm-t1q2"
+ KKE_sns_topic_name = "devops-sns-topic-t1q2"
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_sns_topic.devops_topic: Creating...
aws_sns_topic.devops_topic: Creation complete after 0s [id=arn:aws:sns:us-east-1:000000000000:devops-sns-topic-t1q2]
aws_cloudwatch_metric_alarm.devops_cpu_alarm: Creating...
aws_cloudwatch_metric_alarm.devops_cpu_alarm: Creation complete after 0s [id=devops-cpu-alarm-t1q2]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
KKE_cloudwatch_alarm_name = "devops-cpu-alarm-t1q2"
KKE_sns_topic_name = "devops-sns-topic-t1q2"
🧠 Step-by-Step Explanation (Simple & Clear)
🔹 What problem is this lab solving?
The team wants automatic alerts when EC2 CPU usage becomes high.
Flow:
CloudWatch Alarm → SNS Topic → Alert notification
🔹 Why use locals.tf?
Names are fixed by the lab
No user input required
Prevents typos
Ensures outputs match exactly
Think:
locals = constants
🔹 SNS Topic (aws_sns_topic)
This is the alert destination.
CloudWatch sends alarm notifications to SNS, not directly to users.
SNS = 📣 alert broadcaster
🔹 CloudWatch Alarm (aws_cloudwatch_metric_alarm)
The alarm:
Monitors CPUUtilization
Uses Average
Checks every 5 minutes
Triggers when CPU > 80%
Sends notification to SNS
🔹 Why no EC2 instance?
The lab does not ask for one.
CloudWatch alarms can be created without an EC2 instance.
Creating extra resources can cause grader failure.
🔹 What happens during terraform apply?
1️⃣ SNS topic is created
2️⃣ CloudWatch alarm is created
3️⃣ Alarm is linked to SNS via ARN
4️⃣ AWS begins monitoring CPU metrics
5️⃣ Alerts are ready to trigger
🧠 Easy Memory Model
SNS topic = 📣 alerts go here
CloudWatch alarm = 🚨 condition checker
locals = 📌 fixed names
alarm_actions = 🔗 connection
🚨 Common Mistakes
❌ Creating EC2 when not asked
❌ Wrong output names
❌ Missing actions_enabled = true
❌ Using variables instead of locals
❌ Using wrong comparison operator
Task 3
1️⃣ variables.tf
variable "KKE_VPC_CIDR" {
  type = string
}

variable "KKE_SUBNET_CIDR" {
  type = string
}
2️⃣ terraform.tfvars
KKE_VPC_CIDR    = "10.0.0.0/16"
KKE_SUBNET_CIDR = "10.0.1.0/24"
3️⃣ main.tf
# VPC
resource "aws_vpc" "devops_vpc" {
  cidr_block = var.KKE_VPC_CIDR

  tags = {
    Name = "devops-priv-vpc-t2q3"
  }
}

# Subnet (private, no auto-assign public IP)
resource "aws_subnet" "devops_subnet" {
  vpc_id                  = aws_vpc.devops_vpc.id
  cidr_block              = var.KKE_SUBNET_CIDR
  map_public_ip_on_launch = false

  tags = {
    Name = "devops-priv-subnet-t2q3"
  }
}

# Security Group (allow traffic only from the VPC CIDR)
resource "aws_security_group" "devops_sg" {
  name   = "devops-priv-sg-t2q3"
  vpc_id = aws_vpc.devops_vpc.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [var.KKE_VPC_CIDR]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [var.KKE_VPC_CIDR]
  }
}

# EC2 Instance (private)
resource "aws_instance" "devops_ec2" {
  ami                         = "ami-0c02fb55956c7d316"
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.devops_subnet.id
  vpc_security_group_ids      = [aws_security_group.devops_sg.id]
  associate_public_ip_address = false

  tags = {
    Name = "devops-priv-ec2-t2q3"
  }
}
4️⃣ outputs.tf
output "KKE_vpc_name" {
  value = aws_vpc.devops_vpc.tags["Name"]
}

output "KKE_subnet_name" {
  value = aws_subnet.devops_subnet.tags["Name"]
}

output "KKE_ec2_private" {
  value = aws_instance.devops_ec2.tags["Name"]
}
5️⃣ Terraform Commands (Run in Order)
terraform init
terraform validate
terraform apply
Type:
yes
✅ Expected Output
bob@iac-server ~/terraform/t2q3 via default ➜ terraform apply
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_instance.devops_ec2 will be created
+ resource "aws_instance" "devops_ec2" {
+ ami = "ami-0c02fb55956c7d316"
+ arn = (known after apply)
+ associate_public_ip_address = false
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_stop = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ enable_primary_ipv6 = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ host_resource_group_arn = (known after apply)
+ iam_instance_profile = (known after apply)
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_lifecycle = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = (known after apply)
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ spot_instance_request_id = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "devops-priv-ec2-t2q3"
}
+ tags_all = {
+ "Name" = "devops-priv-ec2-t2q3"
}
+ tenancy = (known after apply)
+ user_data = (known after apply)
+ user_data_base64 = (known after apply)
+ user_data_replace_on_change = false
+ vpc_security_group_ids = (known after apply)
+ capacity_reservation_specification (known after apply)
+ cpu_options (known after apply)
+ ebs_block_device (known after apply)
+ enclave_options (known after apply)
+ ephemeral_block_device (known after apply)
+ instance_market_options (known after apply)
+ maintenance_options (known after apply)
+ metadata_options (known after apply)
+ network_interface (known after apply)
+ private_dns_name_options (known after apply)
+ root_block_device (known after apply)
}
# aws_security_group.devops_sg will be created
+ resource "aws_security_group" "devops_sg" {
+ arn = (known after apply)
+ description = "Managed by Terraform"
+ egress = [
+ {
+ cidr_blocks = [
+ "10.0.0.0/16",
]
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
# (1 unchanged attribute hidden)
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "10.0.0.0/16",
]
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
# (1 unchanged attribute hidden)
},
]
+ name = "devops-priv-sg-t2q3"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}
# aws_subnet.devops_subnet will be created
+ resource "aws_subnet" "devops_subnet" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = (known after apply)
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.1.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "devops-priv-subnet-t2q3"
}
+ tags_all = {
+ "Name" = "devops-priv-subnet-t2q3"
}
+ vpc_id = (known after apply)
}
# aws_vpc.devops_vpc will be created
+ resource "aws_vpc" "devops_vpc" {
+ arn = (known after apply)
+ cidr_block = "10.0.0.0/16"
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_dns_hostnames = (known after apply)
+ enable_dns_support = true
+ enable_network_address_usage_metrics = (known after apply)
+ id = (known after apply)
+ instance_tenancy = "default"
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ ipv6_cidr_block_network_border_group = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "devops-priv-vpc-t2q3"
}
+ tags_all = {
+ "Name" = "devops-priv-vpc-t2q3"
}
}
Plan: 4 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ KKE_ec2_private = "devops-priv-ec2-t2q3"
+ KKE_subnet_name = "devops-priv-subnet-t2q3"
+ KKE_vpc_name = "devops-priv-vpc-t2q3"
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_vpc.devops_vpc: Creating...
aws_vpc.devops_vpc: Creation complete after 0s [id=vpc-7c0b211705fb1e198]
aws_subnet.devops_subnet: Creating...
aws_security_group.devops_sg: Creating...
aws_subnet.devops_subnet: Creation complete after 1s [id=subnet-a16a5e8ea4469ed2b]
aws_security_group.devops_sg: Creation complete after 1s [id=sg-0da9e8f2cab624d93]
aws_instance.devops_ec2: Creating...
aws_instance.devops_ec2: Still creating... [10s elapsed]
aws_instance.devops_ec2: Creation complete after 10s [id=i-07bc9da1a5849f43e]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
KKE_ec2_private = "devops-priv-ec2-t2q3"
KKE_subnet_name = "devops-priv-subnet-t2q3"
KKE_vpc_name = "devops-priv-vpc-t2q3"
🧠 Step-by-Step Explanation (Simple & Clear)
🔹 What problem is this lab solving?
The team wants:
A private network
No public internet exposure
EC2 access only inside the VPC
Secure internal communication
This is basic cloud network isolation.
🔹 Why create a VPC?
A VPC is a private network boundary.
CIDR 10.0.0.0/16 means:
All private IPs
No internet by default
Full isolation from public AWS networks
🔹 Why create a private subnet?
The subnet:
Lives inside the VPC
Uses a smaller CIDR (/24)
Has map_public_ip_on_launch = false
This ensures:
EC2 instances never receive public IPs
🔹 Why does the Security Group restrict the CIDR?
Ingress + Egress:
cidr_blocks = ["10.0.0.0/16"]
Meaning:
Only traffic inside the VPC
No internet access
No external SSH, HTTP, or ICMP
This is zero-trust networking.
🔹 Why no Internet Gateway?
The lab never asks for one.
Without an Internet Gateway:
The subnet stays private
EC2 stays isolated
Grader expectations are met
🔹 Why variables.tf?
CIDR blocks are:
Configurable
Provided by the lab
Required via terraform.tfvars
Think:
variables = lab-controlled inputs
🔹 What happens during terraform apply?
1️⃣ VPC is created
2️⃣ Subnet is created inside the VPC
3️⃣ Security group is created
4️⃣ EC2 instance launches privately
5️⃣ No public access exists
🧠 Easy Memory Model
VPC = 🌐 private network
Subnet = 🧱 private segment
Security Group = 🔒 firewall
EC2 = 🖥 private compute
No IGW = 🚫 no internet
🚨 Common Mistakes (You avoided them)
❌ Enabling a public IP
❌ Using 0.0.0.0/0
❌ Adding an Internet Gateway
❌ Wrong CIDR blocks
❌ Missing security group
Task 4
1️⃣ Prerequisite (IMPORTANT)
📌 The working directory must be exactly:
# Run:
cd /home/bob/terraform/t2q1
# Verify:
pwd
2️⃣ Verify Existing EC2 Instance in State
# Run:
terraform state list
3️⃣ Capture Current Instance ID (Before Replace)
# Run:
terraform output
📌 Note the current EC2 instance ID
instance_id = "i-2bed8aebc72202bb9"
4️⃣ Force Recreate EC2 Using -replace
Run exactly:
terraform apply -replace="aws_instance.web_server"
Type:
yes
5️⃣ Confirm Recreation (During Apply)
bob@iac-server ~/terraform/t2q1 via default ➜ terraform apply -replace="aws_instance.web_server"
aws_instance.web_server: Refreshing state... [id=i-2bed8aebc72202bb9]
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# aws_instance.web_server will be replaced, as requested
-/+ resource "aws_instance" "web_server" {
~ arn = "arn:aws:ec2:us-east-1::instance/i-2bed8aebc72202bb9" -> (known after apply)
~ associate_public_ip_address = true -> (known after apply)
~ availability_zone = "us-east-1a" -> (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
~ disable_api_stop = false -> (known after apply)
~ disable_api_termination = false -> (known after apply)
~ ebs_optimized = false -> (known after apply)
+ enable_primary_ipv6 = (known after apply)
- hibernation = false -> null
+ host_id = (known after apply)
+ host_resource_group_arn = (known after apply)
+ iam_instance_profile = (known after apply)
~ id = "i-2bed8aebc72202bb9" -> (known after apply)
~ instance_initiated_shutdown_behavior = "stop" -> (known after apply)
+ instance_lifecycle = (known after apply)
~ instance_state = "running" -> (known after apply)
~ ipv6_address_count = 0 -> (known after apply)
~ ipv6_addresses = [] -> (known after apply)
+ key_name = (known after apply)
~ monitoring = false -> (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
~ placement_partition_number = 0 -> (known after apply)
~ primary_network_interface_id = "eni-2033b90a4cb4ce1a4" -> (known after apply)
~ private_dns = "ip-10-85-67-208.ec2.internal" -> (known after apply)
~ private_ip = "10.85.67.208" -> (known after apply)
~ public_dns = "ec2-54-214-95-71.compute-1.amazonaws.com" -> (known after apply)
~ public_ip = "54.214.95.71" -> (known after apply)
~ secondary_private_ips = [] -> (known after apply)
~ security_groups = [] -> (known after apply)
+ spot_instance_request_id = (known after apply)
~ subnet_id = "subnet-90508b0e58e056084" -> (known after apply)
tags = {
"Name" = "devops-ec2-t2q1"
}
~ tenancy = "default" -> (known after apply)
+ user_data = (known after apply)
+ user_data_base64 = (known after apply)
~ vpc_security_group_ids = [] -> (known after apply)
# (6 unchanged attributes hidden)
~ capacity_reservation_specification (known after apply)
~ cpu_options (known after apply)
~ ebs_block_device (known after apply)
~ enclave_options (known after apply)
~ ephemeral_block_device (known after apply)
~ instance_market_options (known after apply)
~ maintenance_options (known after apply)
~ metadata_options (known after apply)
- metadata_options {
- http_endpoint = "enabled" -> null
- http_protocol_ipv6 = "disabled" -> null
- http_put_response_hop_limit = 1 -> null
- http_tokens = "optional" -> null
- instance_metadata_tags = "disabled" -> null
}
~ network_interface (known after apply)
~ private_dns_name_options (known after apply)
~ root_block_device (known after apply)
- root_block_device {
- delete_on_termination = true -> null
- device_name = "/dev/sda1" -> null
- encrypted = false -> null
- iops = 0 -> null
- tags = {} -> null
- tags_all = {} -> null
- throughput = 0 -> null
- volume_id = "vol-2b57460d83d2e0698" -> null
- volume_size = 8 -> null
- volume_type = "gp2" -> null
# (1 unchanged attribute hidden)
}
}
Plan: 1 to add, 0 to change, 1 to destroy.
Changes to Outputs:
~ instance_id = "i-2bed8aebc72202bb9" -> (known after apply)
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_instance.web_server: Destroying... [id=i-2bed8aebc72202bb9]
aws_instance.web_server: Still destroying... [id=i-2bed8aebc72202bb9, 10s elapsed]
aws_instance.web_server: Destruction complete after 11s
aws_instance.web_server: Creating...
aws_instance.web_server: Still creating... [10s elapsed]
aws_instance.web_server: Creation complete after 10s [id=i-12d5264fb1ac8596d]
Apply complete! Resources: 1 added, 0 changed, 1 destroyed.
Outputs:
instance_id = "i-12d5264fb1ac8596d"
✅ This confirms forced recreation, not an update.
6️⃣ Verify Instance ID Has Changed
# Run:
terraform output
✅ The EC2 instance ID must be different from before.
This satisfies:
"The new instance created using the -replace option should have a different instance ID"
🧠 Step-by-Step Explanation (Simple & Clear)
🔹 What problem is this lab testing?
This lab tests your understanding of:
Terraform state
Resource recreation
Forced replacement without config change
This is a real production skill.
🔹 What does -replace actually do?
-replace tells Terraform:
"Destroy this resource and recreate it,
even if nothing has changed in the code."
Normally, Terraform only changes resources if the config changes.
-replace overrides that behavior.
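Put together, the whole flow is three commands (the resource address comes from the transcript above):

```
terraform apply -replace="aws_instance.web_server"  # force destroy + recreate
terraform plan                                      # should report no pending changes
terraform output instance_id                        # confirm the ID changed
```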
🔹 Why not change the code?
The lab explicitly says:
"Even though the configuration remains unchanged"
So:
❌ No edits to .tf files
❌ No AMI changes
❌ No instance_type changes
CLI-only operation ✅
🔹 Why must the instance ID change?
An EC2 instance ID is unique per instance.
If Terraform:
Destroys the old instance
Creates a new one
AWS must assign a new ID.
Same ID = ❌ lab failure
Different ID = ✅ lab success
🔹 Why terraform plan must be clean?
KodeKloud graders always check:
terraform plan
If Terraform still wants to change something:
It means recreation was incomplete
Or state drift exists
Or resource name was wrong
That's why this step is mandatory.
🧠 Easy Memory Model
terraform apply → normal behavior
-replace → 🚨 force rebuild
State updated → 🧠 Terraform remembers new instance
Clean plan → ✅ safe to submit
🚨 Common Mistakes
❌ Using wrong resource name in -replace
❌ Editing .tf files
❌ Running in wrong directory
❌ Not checking terraform plan
❌ Submitting with pending changes
TASK 5
1️⃣ variables.tf
variable "KKE_INSTANCE_COUNT" {
type = number
}
variable "KKE_INSTANCE_TYPE" {
type = string
}
variable "KKE_KEY_NAME" {
type = string
}
variable "KKE_INSTANCE_PREFIX" {
type = string
}
2️⃣ terraform.tfvars
KKE_INSTANCE_COUNT = 3
KKE_INSTANCE_TYPE = "t2.micro"
KKE_KEY_NAME = "devops-key-t3q4"
KKE_INSTANCE_PREFIX = "devops-instance-t3q4"
3️⃣ locals.tf
data "aws_ami" "amazon_linux_2" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"]
}
filter {
name = "state"
values = ["available"]
}
}
locals {
AMI_ID = data.aws_ami.amazon_linux_2.id
}
4️⃣ main.tf
resource "aws_instance" "devops_instances" {
count = var.KKE_INSTANCE_COUNT
ami = local.AMI_ID
instance_type = var.KKE_INSTANCE_TYPE
key_name = var.KKE_KEY_NAME
tags = {
Name = "${var.KKE_INSTANCE_PREFIX}-${count.index + 1}"
}
}
5️⃣ outputs.tf
output "kke_instance_names" {
value = aws_instance.devops_instances[*].tags["Name"]
}
6️⃣ Terraform Commands (Run in Order)
terraform init
terraform validate
terraform apply
Type:
yes
✅ Expected Output
bob@iac-server ~/terraform/t3q4 via π default β terraform apply
data.aws_ami.amazon_linux_2: Reading...
data.aws_ami.amazon_linux_2: Read complete after 0s [id=ami-04681a1dbd79675a5]
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_instance.devops_instances[0] will be created
+ resource "aws_instance" "devops_instances" {
+ ami = "ami-04681a1dbd79675a5"
+ arn = (known after apply)
+ associate_public_ip_address = (known after apply)
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_stop = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ enable_primary_ipv6 = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ host_resource_group_arn = (known after apply)
+ iam_instance_profile = (known after apply)
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_lifecycle = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "devops-key-t3q4"
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ spot_instance_request_id = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "devops-instance-t3q4-1"
}
+ tags_all = {
+ "Name" = "devops-instance-t3q4-1"
}
+ tenancy = (known after apply)
+ user_data = (known after apply)
+ user_data_base64 = (known after apply)
+ user_data_replace_on_change = false
+ vpc_security_group_ids = (known after apply)
+ capacity_reservation_specification (known after apply)
+ cpu_options (known after apply)
+ ebs_block_device (known after apply)
+ enclave_options (known after apply)
+ ephemeral_block_device (known after apply)
+ instance_market_options (known after apply)
+ maintenance_options (known after apply)
+ metadata_options (known after apply)
+ network_interface (known after apply)
+ private_dns_name_options (known after apply)
+ root_block_device (known after apply)
}
# aws_instance.devops_instances[1] will be created
+ resource "aws_instance" "devops_instances" {
+ ami = "ami-04681a1dbd79675a5"
+ arn = (known after apply)
+ associate_public_ip_address = (known after apply)
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_stop = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ enable_primary_ipv6 = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ host_resource_group_arn = (known after apply)
+ iam_instance_profile = (known after apply)
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_lifecycle = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "devops-key-t3q4"
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ spot_instance_request_id = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "devops-instance-t3q4-2"
}
+ tags_all = {
+ "Name" = "devops-instance-t3q4-2"
}
+ tenancy = (known after apply)
+ user_data = (known after apply)
+ user_data_base64 = (known after apply)
+ user_data_replace_on_change = false
+ vpc_security_group_ids = (known after apply)
+ capacity_reservation_specification (known after apply)
+ cpu_options (known after apply)
+ ebs_block_device (known after apply)
+ enclave_options (known after apply)
+ ephemeral_block_device (known after apply)
+ instance_market_options (known after apply)
+ maintenance_options (known after apply)
+ metadata_options (known after apply)
+ network_interface (known after apply)
+ private_dns_name_options (known after apply)
+ root_block_device (known after apply)
}
# aws_instance.devops_instances[2] will be created
+ resource "aws_instance" "devops_instances" {
+ ami = "ami-04681a1dbd79675a5"
+ arn = (known after apply)
+ associate_public_ip_address = (known after apply)
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_stop = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ enable_primary_ipv6 = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ host_resource_group_arn = (known after apply)
+ iam_instance_profile = (known after apply)
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_lifecycle = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "devops-key-t3q4"
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ spot_instance_request_id = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "devops-instance-t3q4-3"
}
+ tags_all = {
+ "Name" = "devops-instance-t3q4-3"
}
+ tenancy = (known after apply)
+ user_data = (known after apply)
+ user_data_base64 = (known after apply)
+ user_data_replace_on_change = false
+ vpc_security_group_ids = (known after apply)
+ capacity_reservation_specification (known after apply)
+ cpu_options (known after apply)
+ ebs_block_device (known after apply)
+ enclave_options (known after apply)
+ ephemeral_block_device (known after apply)
+ instance_market_options (known after apply)
+ maintenance_options (known after apply)
+ metadata_options (known after apply)
+ network_interface (known after apply)
+ private_dns_name_options (known after apply)
+ root_block_device (known after apply)
}
Plan: 3 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ kke_instance_names = [
+ "devops-instance-t3q4-1",
+ "devops-instance-t3q4-2",
+ "devops-instance-t3q4-3",
]
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_instance.devops_instances[1]: Creating...
aws_instance.devops_instances[0]: Creating...
aws_instance.devops_instances[2]: Creating...
aws_instance.devops_instances[2]: Still creating... [10s elapsed]
aws_instance.devops_instances[0]: Still creating... [10s elapsed]
aws_instance.devops_instances[1]: Still creating... [10s elapsed]
aws_instance.devops_instances[0]: Creation complete after 10s [id=i-520955aa9320f2ca7]
aws_instance.devops_instances[1]: Creation complete after 10s [id=i-af6d98201e1afd126]
aws_instance.devops_instances[2]: Creation complete after 10s [id=i-30cc75ca448363d06]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
kke_instance_names = [
"devops-instance-t3q4-1",
"devops-instance-t3q4-2",
"devops-instance-t3q4-3",
]
🧠 Step-by-Step Explanation (Simple & Clear)
🔹 What problem is this lab solving?
The team wants:
Multiple EC2 instances
Consistent naming
Scalable configuration
No copy-paste resources
Terraform's count solves this cleanly.
🔹 Why use count?
count lets Terraform:
Create multiple identical resources
Track them individually in state
Scale up/down by changing one number
Example:
count = 3
→ Terraform creates 3 EC2 instances
🔹 Why use variables.tf?
This lab requires configurability:
Number of instances
Instance type
Key pair
Name prefix
Think:
variables = lab-controlled inputs
🔹 Why locals.tf for AMI?
The lab explicitly asks to:
"Define a local variable named AMI_ID that retrieves the latest Amazon Linux 2 AMI using a data source"
So we:
1️⃣ Query AWS for the latest AMI
2️⃣ Store it in local.AMI_ID
3️⃣ Reuse it safely
This avoids hardcoding AMI IDs ✅
🔹 How instance naming works
"${var.KKE_INSTANCE_PREFIX}-${count.index + 1}"
count.index starts at 0
We add + 1 to match lab naming
Produces:
devops-instance-t3q4-1
devops-instance-t3q4-2
devops-instance-t3q4-3
🔹 Why the output uses [*]
aws_instance.devops_instances[*].tags["Name"]
This collects all instance names into a list.
The grader expects:
A list
With correct names
In order
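The splat expression in outputs.tf can also be written as a for expression; both produce the same ordered list (a sketch with a hypothetical output name, not required by the lab):

```hcl
output "kke_instance_names_alt" {
  # Equivalent to aws_instance.devops_instances[*].tags["Name"]
  value = [for inst in aws_instance.devops_instances : inst.tags["Name"]]
}
```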
🔹 What happens during terraform apply?
1️⃣ Terraform fetches latest Amazon Linux 2 AMI
2️⃣ Terraform creates 3 EC2 instances
3️⃣ Each instance gets a unique name
4️⃣ Terraform outputs all instance names
🧠 Easy Memory Model
count = 🔁 repeat
variables = 📥 inputs
locals = 📦 internal constants
data source = 🔍 lookup
output list = 📤 results
🚨 Common Mistakes
❌ Hardcoding AMI ID
❌ Using for_each instead of count
❌ Missing +1 in naming
❌ Wrong output format
❌ Creating separate .tf files unnecessarily
TASK 6
1️⃣ variables.tf
variable "KKE_USER_NAME" {
type = string
}
2️⃣ terraform.tfvars
KKE_USER_NAME = "iamuser_ravi_t3q2"
3️⃣ main.tf
resource "aws_iam_user" "iam_user" {
name = var.KKE_USER_NAME
provisioner "local-exec" {
command = "echo 'KKE ${var.KKE_USER_NAME} has been created successfully!' >> /home/bob/terraform/t3q2/KKE_user_created.log"
}
}
4️⃣ outputs.tf
output "kke_iam_user_name" {
value = aws_iam_user.iam_user.name
}
5️⃣ Terraform Commands (Run in Order)
terraform init
terraform validate
terraform apply
Type:
yes
✅ Expected Output
bob@iac-server ~/terraform/t3q2 via π default β terraform apply
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_iam_user.iam_user will be created
+ resource "aws_iam_user" "iam_user" {
+ arn = (known after apply)
+ force_destroy = false
+ id = (known after apply)
+ name = "iamuser_ravi_t3q2"
+ path = "/"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ kke_iam_user_name = "iamuser_ravi_t3q2"
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_iam_user.iam_user: Creating...
aws_iam_user.iam_user: Provisioning with 'local-exec'...
aws_iam_user.iam_user (local-exec): Executing: ["/bin/sh" "-c" "echo 'KKE iamuser_ravi_t3q2 has been created successfully!' >> /home/bob/terraform/t3q2/KKE_user_created.log"]
aws_iam_user.iam_user: Creation complete after 0s [id=iamuser_ravi_t3q2]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
kke_iam_user_name = "iamuser_ravi_t3q2"
🧠 Step-by-Step Explanation (Simple & Clear)
🔹 What problem is this lab solving?
This lab demonstrates Terraform provisioners, specifically:
Running a local command
After a resource is created
To perform an external action (logging)
🔹 Why use aws_iam_user?
The lab requires:
An IAM user
With an exact name
Managed by Terraform
So we use:
aws_iam_user
🔹 What is a local-exec provisioner?
local-exec runs a command:
On the machine where Terraform runs
Not inside AWS
After the resource is created
Here it:
Writes a confirmation message
To a local log file
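The append itself is plain shell; a standalone sketch (using a temp file instead of the lab's path) shows exactly what runs:

```shell
# Simulates the local-exec command: append one line to a log file.
# The temp-file path is illustrative; the lab writes to its own directory.
LOG="$(mktemp)"
USER_NAME="iamuser_ravi_t3q2"
echo "KKE ${USER_NAME} has been created successfully!" >> "$LOG"
cat "$LOG"
# prints: KKE iamuser_ravi_t3q2 has been created successfully!
```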
🔹 Why is the provisioner inside the resource?
Provisioners are tied to resource lifecycle:
Resource created → provisioner runs
Resource destroyed → provisioner does NOT run (by default)
This guarantees:
Log is written only if IAM user creation succeeds
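If you ever do need a command at deletion time, Terraform supports destroy-time provisioners; a sketch (not required by this lab, with an illustrative resource and file name):

```hcl
# Sketch only: this provisioner runs when the resource is destroyed.
resource "aws_iam_user" "temp_user" {
  name = "temp-user-example" # illustrative name, not the lab's

  provisioner "local-exec" {
    when    = destroy
    command = "echo 'user removed' >> removal.log"
  }
}
```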
🔹 Why use variables.tf here?
The lab explicitly requires:
IAM user name via variable
Value supplied via terraform.tfvars
This ensures:
Configurable input
Grader validation
No hardcoding mistakes
🔹 What happens during terraform apply?
1️⃣ Terraform creates IAM user
2️⃣ AWS confirms creation
3️⃣ local-exec runs
4️⃣ Message is written to KKE_user_created.log
5️⃣ Terraform outputs IAM user name
🧠 Easy Memory Model
IAM user = 👤 identity
Provisioner = ⚙️ post-action
local-exec = 💻 local command
Log file = 📄 proof of execution
🚨 Common Mistakes
❌ Wrong file path
❌ Missing local-exec
❌ Writing log to wrong directory
❌ Hardcoding user name
❌ Using remote-exec instead of local-exec
TASK 7
1️⃣ variables.tf
variable "KKE_BUCKET" {
type = string
}
2️⃣ terraform.tfvars
KKE_BUCKET = "devops-sync-5915-t4q4"
3️⃣ main.tf
# Existing bucket (DO NOT MODIFY)
resource "aws_s3_bucket" "wordpress_bucket" {
bucket = "devops-s3-16124-t4q4"
}
resource "aws_s3_bucket_acl" "wordpress_bucket_acl" {
bucket = aws_s3_bucket.wordpress_bucket.id
acl = "private"
}
# Create the NEW private bucket (from variable)
resource "aws_s3_bucket" "sync_bucket" {
bucket = var.KKE_BUCKET
}
resource "aws_s3_bucket_acl" "sync_bucket_acl" {
bucket = aws_s3_bucket.sync_bucket.id
acl = "private"
}
# Perform data migration (Terraform-triggered)
resource "null_resource" "s3_sync" {
provisioner "local-exec" {
command = "aws s3 sync s3://devops-s3-16124-t4q4 s3://${var.KKE_BUCKET}"
}
depends_on = [
aws_s3_bucket.sync_bucket
]
}
4️⃣ outputs.tf
output "new_kke_bucket_name" {
value = aws_s3_bucket.sync_bucket.bucket
}
output "new_kke_bucket_acl" {
value = aws_s3_bucket_acl.sync_bucket_acl.acl
}
5️⃣ Terraform Commands (Run in Order)
terraform init
terraform validate
terraform apply
Type:
yes
✅ Outputs show:
new_kke_bucket_name = "devops-sync-5915-t4q4"
new_kke_bucket_acl = "private"
🧠 Part 2: Simple Step-by-Step Explanation (Beginner Friendly)
🔹 What problem is this lab solving?
You need to:
Create a new S3 bucket
Copy all objects from an existing bucket
Ensure no data loss
Terraform is used to:
Define infrastructure
Perform controlled copying
Ensure consistency
🔹 Why does the lab give you the source bucket?
resource "aws_s3_bucket" "wordpress_bucket"
This bucket:
Already exists
Already has data
Is treated as read-only
Terraform needs this block so it can:
Discover object keys
Reference the bucket during copy
👉 Terraform does NOT recreate it
🔹 Why must the new bucket use variables.tf?
KodeKloud checks:
Bucket name must come from var.KKE_BUCKET
Hardcoding = ❌ fail
This ensures:
Parameterized infrastructure
Reusability
Correct grading
🔹 How does data migration actually happen?
The main.tf above handles it with a null_resource and a local-exec provisioner:
aws s3 sync s3://devops-s3-16124-t4q4 s3://${var.KKE_BUCKET}
Step 1: Create the destination bucket
Terraform provisions the new bucket (and its private ACL) first.
Step 2: Sync the objects
When the null_resource is created, the provisioner runs the AWS CLI on the Terraform machine, which:
Lists every object in the source bucket
Copies each object into the new bucket
So the transfer is still Terraform-triggered: the command runs exactly once, when the null_resource is created. Under the hood it is the classic sync:
aws s3 sync source-bucket destination-bucket
🔹 Why depends_on is needed
depends_on = [aws_s3_bucket.sync_bucket]
This forces Terraform to:
1️⃣ Create the new bucket
2️⃣ Then run the sync
Without this → the sync could start before the bucket exists → ❌ failures
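A CLI-free alternative would enumerate and copy the objects natively in Terraform; a sketch, assuming AWS provider v4+ (where the `aws_s3_objects` data source and `aws_s3_object_copy` resource are available):

```hcl
# Read the object keys from the existing source bucket.
data "aws_s3_objects" "src" {
  bucket = "devops-s3-16124-t4q4"
}

# Copy each object into the new bucket, keeping the same key.
resource "aws_s3_object_copy" "migrated" {
  for_each = toset(data.aws_s3_objects.src.keys)

  bucket = var.KKE_BUCKET                       # destination bucket
  key    = each.value                           # same object key
  source = "devops-s3-16124-t4q4/${each.value}" # "source-bucket/key" format

  depends_on = [aws_s3_bucket.sync_bucket]
}
```

Either approach works; the local-exec version is shorter, while this one keeps the whole migration in Terraform state.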
🔹 What happens during terraform apply
1️⃣ Terraform checks the source bucket
2️⃣ Creates the destination bucket
3️⃣ Sets ACL to private
4️⃣ Runs aws s3 sync to copy every object
5️⃣ Outputs the final values
🧠 Easy Mental Model

| Component | Meaning |
| --- | --- |
| Source bucket | 📦 Read-only data |
| New bucket | 📥 Destination |
| null_resource | 🔁 One-time copy trigger |
| local-exec | ⚙️ Runs aws s3 sync |
| outputs | 📢 Proof of success |
🚨 Common Mistakes (That You Avoided)
❌ Modifying the source bucket
❌ Hardcoding the new bucket name
❌ Running aws s3 sync by hand instead of through Terraform
❌ Forgetting the ACL
❌ Missing depends_on
TASK 8
1️⃣ variables.tf
variable "KKE_BUCKET_NAME" {
type = string
}
2️⃣ terraform.tfvars
KKE_BUCKET_NAME = "devops-s3-28270-t4q2"
3️⃣ main.tf
resource "aws_s3_bucket" "protected_bucket" {
bucket = var.KKE_BUCKET_NAME
lifecycle {
prevent_destroy = true
}
}
4️⃣ outputs.tf
output "s3_bucket_name" {
value = aws_s3_bucket.protected_bucket.bucket
}
5️⃣ Terraform Commands (Run in Order)
terraform init
terraform validate
terraform apply
Type:
yes
✅ Output
bob@iac-server ~/terraform/t4q2 via π default β terraform apply
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_s3_bucket.protected_bucket will be created
+ resource "aws_s3_bucket" "protected_bucket" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = "devops-s3-28270-t4q2"
+ bucket_domain_name = (known after apply)
+ bucket_prefix = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags_all = (known after apply)
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
+ cors_rule (known after apply)
+ grant (known after apply)
+ lifecycle_rule (known after apply)
+ logging (known after apply)
+ object_lock_configuration (known after apply)
+ replication_configuration (known after apply)
+ server_side_encryption_configuration (known after apply)
+ versioning (known after apply)
+ website (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ s3_bucket_name = "devops-s3-28270-t4q2"
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_s3_bucket.protected_bucket: Creating...
aws_s3_bucket.protected_bucket: Creation complete after 0s [id=devops-s3-28270-t4q2]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
s3_bucket_name = "devops-s3-28270-t4q2"
🧠 Simple Step-by-Step Explanation (Beginner Friendly)
Let's explain why each part exists, in plain language.
🔹 Why use variables.tf?
The lab wants the bucket name to be:
Dynamic
Not hardcoded
That's why we use:
bucket = var.KKE_BUCKET_NAME
The grader checks this explicitly.
🔹 What does prevent_destroy = true do?
lifecycle {
prevent_destroy = true
}
This tells Terraform:
🚫 "Never allow this bucket to be destroyed, even if someone runs terraform destroy."
If someone tries:
terraform destroy
Terraform will stop with an error.
This protects against accidental deletion, which is the core goal of this lab.
🔹 Why no ACL, versioning, or encryption?
Because the lab did not ask for them.
KodeKloud graders are strict:
Extra resources can cause failures
Minimal, exact configuration is safest
🔹 What happens during terraform apply?
Terraform reads terraform.tfvars
Resolves var.KKE_BUCKET_NAME
Creates the S3 bucket
Registers the lifecycle rule in state
Outputs the bucket name
🔹 What happens if someone tries to delete it later?
Terraform will say:
❌ Resource has prevent_destroy set and cannot be destroyed.
That's expected and correct.
🧠 Easy Memory Rule
variables.tf → user input
terraform.tfvars → actual value
main.tf → infrastructure + protection
lifecycle block → safety lock 🔒
outputs.tf → grader verification
🚨 Common Mistakes (Avoid These)
❌ Hardcoding bucket name
❌ Forgetting prevent_destroy
❌ Putting lifecycle in a separate file
❌ Output name mismatch
❌ Adding unnecessary resources