Examples of how to use Terraform and work with its backends and modules.
Here we will set up a simple EC2 instance in AWS and store Terraform’s state-files in an AWS S3 bucket.
In short – but with real examples and links to the documentation.
Installation on Arch Linux:
$ sudo pacman -S terraform
For authorization we will use an existing AWS profile named setevoy-root.
main.tf
Create a main.tf file and configure an AWS provider, the AWS region to be used, and the AWS profile name:
provider "aws" {
region = "${var.aws-region}"
profile = "setevoy-root"
}
Here the aws provider’s region parameter is defined in a variable, which is set in the variables.tf file:
variable "aws-region" {
default = "eu-west-1"
description = "Default Amazon region"
}
Next, in the main.tf add a Terraform resource – an AWS EC2 instance:
provider "aws" {
region = "${var.aws-region}"
profile = "setevoy-root"
}
resource "aws_instance" "tf-example-ec2" {
ami = "ami-34414d4d"
instance_type = "t2.nano"
key_name = "${var.aws-key-name}"
associate_public_ip_address = "true"
tags {
"Name" = "tf-example-ec2"
"Env" = "${var.aws-cluster-name}"
}
}
And in the variables.tf – a new variable with the key name to be used for SSH access, tf-example-ec2-key in this example:
variable "aws-region" {
default = "eu-west-1"
description = "Default Amazon region"
}
variable "aws-key-name" {
default = "tf-example-ec2-key"
description = "EC2 acces key pair"
}
See Terraform’s variables documentation for more details.
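The default values from variables.tf can be overridden without editing the file – for example, with the -var option or via a terraform.tfvars file, which Terraform loads automatically. A minimal sketch (the values here are just an illustration, not part of this example project):

# terraform.tfvars – loaded automatically by terraform plan/apply,
# values here override the defaults from variables.tf
aws-region   = "eu-central-1"
aws-key-name = "my-own-ec2-key"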
The first run
As we are using the aws provider, Terraform needs to download its plugin first.
To do that – use the init command:
$ terraform init
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (1.24.0)...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 1.24"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
By default the aws plugin file will be stored in the current directory, where Terraform will create a .terraform directory (the path can be set using the -plugin-dir option):
$ ls -la .terraform/plugins/linux_amd64/
total 74232
drwxr-xr-x 2 setevoy setevoy 4096 Jun 25 17:19 .
drwxr-xr-x 3 setevoy setevoy 4096 Jun 25 17:19 ..
-rwxr-xr-x 1 setevoy setevoy 79 Jun 25 17:19 lock.json
-rwxr-xr-x 1 setevoy setevoy 75997088 Jun 25 17:19 terraform-provider-aws_v1.24.0_x4
init will also perform the backend storage configuration to store Terraform’s state-files – we will talk about it a bit later.
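Also, as the init output above suggests, it is a good idea to pin the aws provider version to avoid unexpected major upgrades – a minimal sketch of the same provider block with the suggested constraint added:

provider "aws" {
  region  = "${var.aws-region}"
  profile = "setevoy-root"

  # pin the provider to the 1.24.x series, as suggested by terraform init
  version = "~> 1.24"
}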
terraform plan
Now you can execute the plan command to see what exactly will be performed by Terraform:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ aws_instance.tf-example-ec2
id: <computed>
ami: "ami-34414d4d"
associate_public_ip_address: "true"
availability_zone: <computed>
ebs_block_device.#: <computed>
ephemeral_block_device.#: <computed>
get_password_data: "false"
instance_state: <computed>
instance_type: "t2.nano"
ipv6_address_count: <computed>
ipv6_addresses.#: <computed>
key_name: "tf-example-ec2-key"
network_interface.#: <computed>
network_interface_id: <computed>
password_data: <computed>
placement_group: <computed>
primary_network_interface_id: <computed>
private_dns: <computed>
private_ip: <computed>
public_dns: <computed>
public_ip: <computed>
root_block_device.#: <computed>
security_groups.#: <computed>
source_dest_check: "true"
subnet_id: <computed>
tags.%: "1"
tags.Name: "tf-example-ec2"
tenancy: <computed>
volume_tags.%: <computed>
vpc_security_group_ids.#: <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Here in the output we can see that a new resource will be created (the `+` symbol before `aws_instance.tf-example-ec2`).
A `-` means deletion of a resource, and `~` – modification of an existing resource.
terraform apply
To actually perform the create/delete/modify actions for the resources described in the main.tf – use the apply command:
$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ aws_instance.tf-example-ec2
id: <computed>
ami: "ami-34414d4d"
associate_public_ip_address: "true"
availability_zone: <computed>
ebs_block_device.#: <computed>
ephemeral_block_device.#: <computed>
get_password_data: "false"
instance_state: <computed>
instance_type: "t2.nano"
ipv6_address_count: <computed>
ipv6_addresses.#: <computed>
key_name: "tf-example-ec2-key"
network_interface.#: <computed>
network_interface_id: <computed>
password_data: <computed>
placement_group: <computed>
primary_network_interface_id: <computed>
private_dns: <computed>
private_ip: <computed>
public_dns: <computed>
public_ip: <computed>
root_block_device.#: <computed>
security_groups.#: <computed>
source_dest_check: "true"
subnet_id: <computed>
tags.%: "1"
tags.Name: "tf-example-ec2"
tenancy: <computed>
volume_tags.%: <computed>
vpc_security_group_ids.#: <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_instance.tf-example-ec2: Creating...
ami: "" => "ami-34414d4d"
associate_public_ip_address: "" => "true"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
get_password_data: "" => "false"
instance_state: "" => "<computed>"
instance_type: "" => "t2.nano"
ipv6_address_count: "" => "<computed>"
ipv6_addresses.#: "" => "<computed>"
key_name: "" => "tf-example-ec2-key"
network_interface.#: "" => "<computed>"
network_interface_id: "" => "<computed>"
password_data: "" => "<computed>"
placement_group: "" => "<computed>"
primary_network_interface_id: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "<computed>"
source_dest_check: "" => "true"
subnet_id: "" => "<computed>"
tags.%: "" => "1"
tags.Name: "" => "tf-example-ec2"
tenancy: "" => "<computed>"
volume_tags.%: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
aws_instance.tf-example-ec2: Still creating... (10s elapsed)
aws_instance.tf-example-ec2: Still creating... (20s elapsed)
aws_instance.tf-example-ec2: Still creating... (30s elapsed)
aws_instance.tf-example-ec2: Still creating... (40s elapsed)
aws_instance.tf-example-ec2: Creation complete after 47s (ID: i-062174155cee10e51)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Now let’s check the EC2 instance using the ID which was returned in Terraform’s output:
$ aws ec2 describe-instances --instance-ids i-062174155cee10e51
{
    "Reservations": [
        {
            "Groups": [],
            "Instances": [
                {
                    "AmiLaunchIndex": 0,
                    "ImageId": "ami-34414d4d",
                    "InstanceId": "i-062174155cee10e51",
                    "InstanceType": "t2.nano",
                    "KeyName": "tf-example-ec2-key",
                    "LaunchTime": "2018-06-25T14:36:09.000Z",
                    "Monitoring": {
                        "State": "disabled"
                    },
...
terraform destroy
To delete the resources created from our main.tf – use the destroy command:
$ terraform destroy
aws_instance.tf-example-ec2: Refreshing state... (ID: i-062174155cee10e51)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
- aws_instance.tf-example-ec2
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_instance.tf-example-ec2: Destroying... (ID: i-062174155cee10e51)
aws_instance.tf-example-ec2: Still destroying... (ID: i-062174155cee10e51, 10s elapsed)
aws_instance.tf-example-ec2: Still destroying... (ID: i-062174155cee10e51, 20s elapsed)
aws_instance.tf-example-ec2: Still destroying... (ID: i-062174155cee10e51, 30s elapsed)
aws_instance.tf-example-ec2: Destruction complete after 32s
Destroy complete! Resources: 1 destroyed.
State-files
Terraform keeps information about your infrastructure in so-called “state-files”.
By default, this will be a terraform.tfstate file in the project’s directory.
For example, if you check it now – there will be zero resources, as we have just performed destroy:
$ cat terraform.tfstate
{
    "version": 3,
    "terraform_version": "0.11.7",
    "serial": 3,
    "lineage": "a83f41ba-0a1e-ec21-cc4c-312940dfb53f",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}
Before running apply, Terraform will create a backup of the existing state-file (if it is present) in the terraform.tfstate.backup file – there you can see the resources which were created before the destroy:
$ cat terraform.tfstate.backup
{
    "version": 3,
    "terraform_version": "0.11.7",
    "serial": 3,
    "lineage": "a83f41ba-0a1e-ec21-cc4c-312940dfb53f",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "aws_instance.tf-example-ec2": {
                    "type": "aws_instance",
                    "depends_on": [],
                    "primary": {
                        "id": "i-062174155cee10e51",
...
The file name which will keep the infrastructure state can be set using the -state option, and the backup file name – with -backup.
Let’s check – create an EC2 again, and let’s use -auto-approve to avoid entering yes each time:
$ terraform apply -state /tmp/tf-example-ec2.tfstate -backup /tmp/tf-example-ec2.tfstate.backup -auto-approve
Check your state-file:
$ cat /tmp/tf-example-ec2.tfstate
{
    "version": 3,
    "terraform_version": "0.11.7",
    "serial": 4,
    "lineage": "a83f41ba-0a1e-ec21-cc4c-312940dfb53f",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "aws_instance.tf-example-ec2": {
                    "type": "aws_instance",
                    "depends_on": [],
                    "primary": {
                        "id": "i-01b63239677ef31e8",
...
Similarly, when using destroy you have to specify the same state-file – /tmp/tf-example-ec2.tfstate:
$ terraform destroy -state /tmp/tf-example-ec2.tfstate -backup /tmp/tf-example-ec2.tfstate.backup -auto-approve
aws_instance.tf-example-ec2: Refreshing state... (ID: i-0c6779781dabd81d8)
aws_instance.tf-example-ec2: Destroying... (ID: i-0c6779781dabd81d8)
...
aws_instance.tf-example-ec2: Destruction complete after 31s
Destroy complete! Resources: 1 destroyed.
terraform state
To work with state-files Terraform has the state command.
For example, to check the resources already existing in a state-file you can use its list subcommand:
$ terraform state list -state=/tmp/tf-example-ec2.tfstate
aws_instance.tf-example-ec2
Backends
By default, Terraform will create state-files locally, but remote storage can also be used.
The most commonly used storage (at least in my own practice) is an AWS S3 bucket.
Let’s update our main.tf and describe an S3 bucket to store state-files – add the backend "s3" configuration:
provider "aws" {
region = "${var.aws-region}"
profile = "setevoy-root"
}
resource "aws_instance" "tf-example-ec2" {
ami = "ami-34414d4d"
instance_type = "t2.nano"
key_name = "${var.aws-key-name}"
associate_public_ip_address = "true"
tags {
"Name" = "tf-example-ec2"
}
}
terraform {
backend "s3" {
profile = "setevoy-root"
bucket = "tf-example-states"
key = "tf-example/terraform.tfstate"
region = "eu-west-1"
}
}
Create this bucket:
$ aws s3api create-bucket --bucket tf-example-states --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
{
"Location": "http://tf-example-states.s3.amazonaws.com/"
}
As we added the s3 backend – run init once again:
$ terraform init
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 1.24"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Terraform already copied the state-file to this bucket:
$ aws s3 ls s3://tf-example-states/tf-example/
2018-06-25 18:40:11 317 terraform.tfstate
Now all changes in the project will be saved to this file in this bucket. This is a great approach when you use Terraform in some kind of automation, for example when you run Terraform from a temporary Docker container in a Jenkins job (see the AWS: билд Java + Maven + Docker + Packer + Terraform (Rus) post for an example).
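If several people or CI jobs work with the same state-file, the s3 backend also supports state locking via a DynamoDB table, so that two runs cannot modify the state at the same time. A minimal sketch, assuming a pre-created table (the tf-example-locks name here is just an illustration) with a LockID string primary key:

terraform {
  backend "s3" {
    profile        = "setevoy-root"
    bucket         = "tf-example-states"
    key            = "tf-example/terraform.tfstate"
    region         = "eu-west-1"

    # hypothetical DynamoDB table used for state locking; must be created beforehand
    dynamodb_table = "tf-example-locks"
  }
}

As with any backend change, terraform init has to be run again after adding this.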
Terraform modules
What else needs to be mentioned is Terraform modules, which allow you to keep resources separately and facilitate their management.
For example, our main.tf file is our project’s root module, where we describe our resources.
In the same way, you can create a new directory, for example ec2, describe your EC2 instance inside it, and keep its own variables inside this module:
$ mkdir ec2
Now create a new file ec2.tf here and a variables file – variables.tf:
$ touch ec2/{ec2.tf,variables.tf}
In the ec2/ec2.tf file describe your EC2 resource:
resource "aws_instance" "tf-example-ec2" {
ami = "${var.aws-ec2-ami-id}"
instance_type = "${var.aws-ec2-type}"
key_name = "${var.aws-key-name}"
associate_public_ip_address = "true"
tags {
"Name" = "tf-example-ec2"
}
}
And variables:
variable "aws-ec2-ami-id" {
default = "ami-34414d4d"
}
variable "aws-ec2-type" {
default = "t2.nano"
}
variable "aws-key-name" {
default = "tf-example-ec2-key"
}
Next – add this ec2 module to be called from the main.tf (notice that the old resource "aws_instance" block is commented out with /* ... */):
provider "aws" {
region = "${var.aws-region}"
profile = "setevoy-root"
}
module "ec2" {
source = "ec2"
}
/*
resource "aws_instance" "tf-example-ec2" {
ami = "ami-34414d4d"
instance_type = "t2.nano"
key_name = "${var.aws-key-name}"
associate_public_ip_address = "true"
tags {
"Name" = "tf-example-ec2"
}
}
*/
terraform {
backend "s3" {
profile = "setevoy-root"
bucket = "tf-example-states"
key = "tf-example/terraform.tfstate"
region = "eu-west-1"
}
}
And again, as we added a new module – run init:
$ terraform init
Initializing modules...
- module.ec2
Getting source "ec2"
Initializing the backend...
Initializing provider plugins...
...
Now call the plan command:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ module.ec2.aws_instance.tf-example-ec2
id: <computed>
ami: "ami-34414d4d"
...
Run apply now:
$ terraform apply -auto-approve
module.ec2.aws_instance.tf-example-ec2: Creating...
ami: "" => "ami-34414d4d"
associate_public_ip_address: "" => "true"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
get_password_data: "" => "false"
instance_state: "" => "<computed>"
instance_type: "" => "t2.nano"
ipv6_address_count: "" => "<computed>"
ipv6_addresses.#: "" => "<computed>"
key_name: "" => "tf-example-ec2-key"
network_interface.#: "" => "<computed>"
network_interface_id: "" => "<computed>"
password_data: "" => "<computed>"
placement_group: "" => "<computed>"
primary_network_interface_id: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "<computed>"
source_dest_check: "" => "true"
subnet_id: "" => "<computed>"
tags.%: "" => "1"
tags.Name: "" => "tf-example-ec2"
tenancy: "" => "<computed>"
volume_tags.%: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
module.ec2.aws_instance.tf-example-ec2: Still creating... (10s elapsed)
module.ec2.aws_instance.tf-example-ec2: Creation complete after 15s (ID: i-08d703ddd4f252382)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
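At this point all the instance’s attributes live inside the ec2 module. To get something from the module back to the root configuration – for example, the instance’s public IP – the module can declare an output, and the root module can re-export it. A minimal sketch, not part of the original example (the file and output names here are arbitrary):

# ec2/outputs.tf – expose the instance's public IP from the module
output "public-ip" {
  value = "${aws_instance.tf-example-ec2.public_ip}"
}

# main.tf – re-export the module's output so it is visible to "terraform output"
output "ec2-public-ip" {
  value = "${module.ec2.public-ip}"
}

After the next apply, terraform output ec2-public-ip would print the address.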
In general, that’s all you need to know to start working with Terraform.
Similar posts
- 10/31/2015 Terraform: создание проекта и запуск AWS EC2 (0)
- 05/03/2017 AWS [China]: начало (0)
- 02/20/2017 AWS: билд Java + Maven + Docker + Packer + Terraform (0)
- 06/28/2018 Terraform: создание проекта с EC2, VPC и AWS cross-region VPC peering (0)