Today we are going to solve the challenge lab Import an AWS resource, created by Cybr.
This lab will test whether we can successfully import an existing S3 bucket into our Terraform state and configuration.
So let's take a look at the scenario presented:
Scenario 👨‍🔬
For our scenario, let’s pretend that you’ve already completed this course, and you go to your team to tell them you need to start using Terraform and IaC to manage all of your infrastructure in AWS accounts. You decide to start with one of the easiest accounts that has the fewest resources. That account has an Amazon S3 bucket that was manually created. You’d like to start by importing that resource.
It’s imperative that you not change any of the bucket’s existing settings/configurations! You are only importing the existing resource, not applying any changes to that bucket.
You’ve completed this step when you get the following message:
❯ terraform plan
...
No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
Once you’ve imported the bucket successfully, go ahead and delete it with terraform destroy!
You’ve completed this step when you get the following message:
❯ terraform destroy
aws_s3_bucket.bucket: Refreshing state...
...
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes
aws_s3_bucket.bucket: Destroying... [id=cybrlab-import-bucket-272281913033]
aws_s3_bucket.bucket: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.
And when running this command returns zero buckets:
❯ aws s3api list-buckets
Good luck and have fun!
We are also given the following hints:
Tip #1
Here’s the Amazon S3 CLI documentation; the list-buckets command will probably be helpful.
Tip #2
Here’s a link to the AWS provider documentation for convenience.
Tip #3
A good starting point is to create three files: main.tf, provider.tf, variables.tf. Start by configuring those.
Tip #4
You are very likely to encounter errors when importing resources with Terraform, especially when running terraform plan after importing. This is normal and part of the troubleshooting process! Read the error codes — they are usually very helpful.
Bonus Points
For bonus points, if you get this warning, find a way to get rid of it!
Warning: Argument is deprecated
Note: There are several valid methods of completing this lab; I am just choosing one of them. Also note that the resource blocks and outputs have been sanitized - you will need to fill these in with the values you get back.
We are first going to set up our profile with the provided credentials. Press Start Lab on the lab webpage to reveal our Access Key ID and Secret Access Key. In our terminal we enter aws configure --profile cybr and supply the generated values as follows:
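The prompts will look something like this (the keys shown here are placeholders; the default output format is our choice, not something the lab requires):

❯ aws configure --profile cybr
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json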
We are good to go. Let's start creating our required files.
We begin with our provider.tf file. To get the latest provider version, we head over to the Terraform AWS Registry and, in the top right of the page, select USE PROVIDER, which will drop down the code blocks we need.
We are going to add some configuration to our AWS provider block to use a variable for the region (which we will set soon) and the profile we want to use (which we set up earlier).
Our provider.tf will look like this:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "6.3.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region  = var.aws_region
  profile = "cybr"
}
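As an aside, pinning to the exact 6.3.0 release works, but a pessimistic version constraint is a common alternative that picks up newer 6.x releases automatically. A minimal sketch (the constraint value is our choice, not something the lab requires):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0" # any 6.x release
    }
  }
}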
Now, we write our variables.tf file like so:
variable "aws_region" {
description = "The AWS region to deploy in"
type = string
default = "us-east-1"
}
Here we are setting our region to us-east-1.
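As a quick aside, because aws_region is a variable, it can be overridden at plan time without editing any files; a hedged example (the alternate region here is arbitrary):

❯ terraform plan -var="aws_region=us-west-2"

For this lab we stick with the us-east-1 default.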
It's time to implement our main.tf file. Before we attempt to create any resource blocks, we are going to need to find the existing S3 bucket. In our terminal we run the command aws s3api list-buckets --profile cybr.
Now we get back a JSON object that shows the details of the existing bucket, which will look similar to this (sanitized and anonymized):
{
    "Buckets": [
        {
            "Name": "demo-import-bucket",
            "CreationDate": "2025-07-16T20:26:20+00:00"
        }
    ],
    "Owner": {
        "DisplayName": "example-user",
        "ID": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    },
    "Prefix": null
}
Once we grab the name, we can write our code using an import block and a minimal resource block as follows:
import {
  to = aws_s3_bucket.bucket_to_import
  id = "demo-import-bucket"
}

resource "aws_s3_bucket" "bucket_to_import" {}
In the import block, replace the id value with the actual name of the bucket you got back from the terminal command above.
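As a side note, import blocks require Terraform 1.5 or later. On older versions, the same import can be done imperatively with the terraform import command (no import block needed, though the resource block must already exist); a sketch assuming the same bucket name:

❯ terraform import aws_s3_bucket.bucket_to_import demo-import-bucket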
Now, in the terminal, run a terraform init. It would be a good idea to also run a terraform validate to ensure our code doesn't have any glaring issues.
Now we run a terraform plan followed by a terraform apply. We will see our resource being imported and the state file being created. Follow that up with a terraform state list. We should see aws_s3_bucket.bucket_to_import.
Let's see the details so we can grab them and drop them into our resource block. We run terraform state show aws_s3_bucket.bucket_to_import.
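The command prints the resource's attributes in an HCL-like format, looking something like this (abbreviated and sanitized; your values will differ):

# aws_s3_bucket.bucket_to_import:
resource "aws_s3_bucket" "bucket_to_import" {
    arn           = "arn:aws:s3:::demo-import-bucket"
    bucket        = "demo-import-bucket"
    force_destroy = false
    id            = "demo-import-bucket"
    # ... many more attributes ...
}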
From the output, copy the bucket value into the resource block. Additionally, we are going to set force_destroy = true. Note that force_destroy is a Terraform-side argument rather than a bucket setting; it simply allows terraform destroy to delete the bucket later even if it contains objects.
Our main.tf will look like this:
import {
  to = aws_s3_bucket.bucket_to_import
  id = "demo-import-bucket"
}

resource "aws_s3_bucket" "bucket_to_import" {
  bucket        = "demo-import-bucket"
  force_destroy = true
}
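Regarding the bonus points: the Argument is deprecated warning typically appears if you copy one of the legacy aws_s3_bucket arguments (such as acl or versioning) from the terraform state show output into your resource block. Since v4 of the AWS provider, these settings are managed by standalone resources, so the fix is to remove the deprecated argument and, if you still need to manage that setting, use the dedicated resource instead. A minimal sketch, assuming the warning came from an acl argument (your case may differ):

# Instead of the deprecated acl argument inside aws_s3_bucket,
# manage the ACL with the standalone resource:
resource "aws_s3_bucket_acl" "bucket_to_import" {
  bucket = aws_s3_bucket.bucket_to_import.id
  acl    = "private"
}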
Now run a terraform plan again to confirm what changes we are going to make. If satisfied, run a terraform apply -auto-approve, which will confirm and apply the changes.
Great! Now our rogue resource is being managed by Terraform.
It's time to clean up our resources. We can (optionally) remove the import block in our main.tf file, as we no longer need it. Subsequently, we can run a terraform destroy and confirm our choice. This will destroy our S3 bucket. We can confirm this with an aws s3api list-buckets --profile cybr command run in the terminal. We should see no buckets listed.
If everything has worked as expected, press the Terminate Lab button, and you have successfully completed the challenge lab.
To view the files we created in their final form, see the below:
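provider.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "6.3.0"
    }
  }
}

provider "aws" {
  # Configuration options
  region  = var.aws_region
  profile = "cybr"
}

variables.tf:

variable "aws_region" {
  description = "The AWS region to deploy in"
  type        = string
  default     = "us-east-1"
}

main.tf (shown with the optional import block already removed):

resource "aws_s3_bucket" "bucket_to_import" {
  bucket        = "demo-import-bucket"
  force_destroy = true
}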