One of the disappointing surprises in my AWS CloudFormation experience was that it could not automatically create cross-region VPC peering connections.
Note: this post was originally written in Russian on 28 June 2018; CloudFormation can do this now, see the PeerRegion parameter of the AWS::EC2::VPCPeeringConnection resource.
As a result, it tries to create the connection in the same region and, obviously, fails.
There was a topic about this on the AWS forums, but at the time it had no comment from the AWS developers.
To solve this, I had to use Terraform for our Jenkins stack (it is located in Europe, while the rest of our resources are hosted in US AWS regions).
Below is an example of creating an AWS EC2 instance, a VPC, and a cross-region VPC peering connection.
In this stack (eu-west-1) a Jenkins instance will run, connected to our Prometheus monitoring stack in the us-east-2 region via AWS VPC peering.
Project’s structure
The files and directories will look like this:
$ tree terraform/
terraform/
├── ec2
│ ├── ec2_icmp_and_default_sg.tf
│ ├── ec2.tf
│ ├── jenkins_security_group.tf
│ └── variables.tf
├── main.tf
├── terraform_exec.sh
├── variables.tf
└── vpc
├── variables.tf
└── vpc.tf
Here the ec2 and vpc directories are Terraform modules, and the terraform_exec.sh script is used to run the terraform plan/apply/destroy commands with the necessary options and parameters.
The script
The script is just a rough draft of how Terraform will be executed in a Jenkins job, and it is used purely for convenience, to avoid repeating the same set of options every time.
In this project an AWS S3 bucket is used as the backend storage for Terraform's state files; it is initialized in the terraform_config() function.
Then, during stack creation, the terraform_plan() function is called, the script asks for confirmation, and terraform_apply() runs. The destroy flow works similarly.
The script itself:
#!/usr/bin/env bash
HELP="\n\t-a: apply
\n\t-D: delete
\n\t-e: environment to be used, defaults to \"dev\"\n\t"
# set default action to apply
apply=1
destroy=
while getopts "aDe:h" opt; do
case $opt in
a)
apply=1
;;
D)
apply=
destroy=1
;;
e)
ENV=$OPTARG
;;
h)
echo -e $HELP
exit 0
;;
?)
echo -e $HELP && exit 1
;;
esac
done
# global vars
[[ -z $ENV ]] && ENV="dev"
AWS_PROFILE="jenkins-ci-provisioning"
AWS_REGION="eu-west-1"
CLUSTER_NAME="jenkins-ci-$ENV"
# monitoring peering data
MON_PROD_VPC_ID="vpc-51e8b639"
MON_PROD_REGION="us-east-2"
MON_PROD_VPC_CIDR="10.0.1.0/24"
# terraform backend vars
TF_BE_S3_BUCKET="terraform-$CLUSTER_NAME"
TF_BE_S3_STATE_KEY="$TF_BE_S3_BUCKET.tfstate"
echo -e "\nENV=$ENV
AWS CLI profile: $AWS_PROFILE
AWS region: $AWS_REGION
Application cluster name: $CLUSTER_NAME
Terraform backend S3 bucket name: $TF_BE_S3_BUCKET
Terraform backend key filename: $TF_BE_S3_STATE_KEY
"
read -p "Are you sure to proceed? [y/n] " -r
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
# load modules
terraform get
# setup backend
terraform_config () {
terraform init \
-backend-config="bucket=$TF_BE_S3_BUCKET" \
-backend-config="key=$TF_BE_S3_STATE_KEY" \
-backend-config="region=$AWS_REGION" \
-backend-config="profile=$AWS_PROFILE"
}
terraform_plan () {
terraform plan \
-var env=$ENV \
-var aws-region=$AWS_REGION \
-var aws-profile=$AWS_PROFILE \
-var cluster-name=$CLUSTER_NAME \
-var monitoring-prod-vpc-id=$MON_PROD_VPC_ID \
-var monitoring-prod-region=$MON_PROD_REGION \
-var monitoring-prod-vpc-cidr=$MON_PROD_VPC_CIDR
}
terraform_apply () {
terraform apply \
-var env=$ENV \
-var aws-region=$AWS_REGION \
-var aws-profile=$AWS_PROFILE \
-var cluster-name=$CLUSTER_NAME \
-var monitoring-prod-vpc-id=$MON_PROD_VPC_ID \
-var monitoring-prod-region=$MON_PROD_REGION \
-var monitoring-prod-vpc-cidr=$MON_PROD_VPC_CIDR
}
terraform_destroy () {
terraform plan -destroy \
-var env=$ENV \
-var aws-region=$AWS_REGION \
-var aws-profile=$AWS_PROFILE \
-var cluster-name=$CLUSTER_NAME \
-var monitoring-prod-vpc-id=$MON_PROD_VPC_ID \
-var monitoring-prod-region=$MON_PROD_REGION \
-var monitoring-prod-vpc-cidr=$MON_PROD_VPC_CIDR
echo
read -p "Plan complete. Are you sure to proceed? [y/n] " -r
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
terraform destroy \
-var env=$ENV \
-var aws-region=$AWS_REGION \
-var aws-profile=$AWS_PROFILE \
-var cluster-name=$CLUSTER_NAME \
-var monitoring-prod-vpc-id=$MON_PROD_VPC_ID \
-var monitoring-prod-region=$MON_PROD_REGION \
-var monitoring-prod-vpc-cidr=$MON_PROD_VPC_CIDR
}
apply () {
terraform_config || exit 1
terraform_plan || exit 1
read -p "Plan complete. Are you sure to proceed? [y/n] " -r
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
echo
terraform_apply || exit 1
}
if [[ $apply == 1 ]]; then
echo -e "\nRunning Apply action..."
apply
elif [[ $destroy == 1 ]]; then
echo -e "\nRunning Destroy action..."
terraform_destroy
else
echo -e "\nERROR: action does not set, exiting."
exit 1
fi
Terraform
main.tf
In main.tf the backend is configured, the AWS provider is created, and the list of availability zones is obtained.
Then the two modules, ec2 and vpc, are called with the necessary variables; some of them are defined as global variables and some come from main.tf itself (for example, the eu-west-1a value is taken from data.aws_availability_zones.available.names[0]).
The file’s content:
terraform {
backend "s3" {
}
}
provider "aws" {
region = "${var.aws-region}"
profile = "${var.aws-profile}"
}
data "aws_availability_zones" "available" {
state = "available"
}
module "vpc" {
source = "vpc"
env = "${var.env}"
aws-availability-zone = "${data.aws_availability_zones.available.names[0]}"
monitoring-prod-vpc-id = "${var.monitoring-prod-vpc-id}"
monitoring-prod-region = "${var.monitoring-prod-region}"
monitoring-prod-vpc-cidr = "${var.monitoring-prod-vpc-cidr}"
aws-profile = "${var.aws-profile}"
}
module "ec2" {
source = "ec2"
env = "${var.env}"
aws-availability-zone = "${data.aws_availability_zones.available.names[0]}"
jenkins-public-subnet-id = "${module.vpc.jenkins-public-subnet-id}"
jenkins-vpc-id = "${module.vpc.jenkins-vpc-id}"
}
The ec2 module
This module describes an EC2 instance, an EBS volume with all of Jenkins' data to be attached to it, and an Elastic IP.
Module’s content:
$ ls -l ec2/
total 16
-rw-r--r-- 1 setevoy setevoy 440 Jun 27 16:22 ec2_icmp_and_default_sg.tf
-rw-r--r-- 1 setevoy setevoy 1176 Jun 27 15:20 ec2.tf
-rw-r--r-- 1 setevoy setevoy 1032 Jun 28 10:25 jenkins_security_group.tf
-rw-r--r-- 1 setevoy setevoy 1611 Jun 27 14:27 variables.tf
The module's main file, ec2.tf:
resource "aws_volume_attachment" "jenkins-data-ebs-attach" {
device_name = "/dev/xvdb"
volume_id = "${lookup(var.ec2-data-ebs-id, var.env)}"
instance_id = "${aws_instance.jenkins-ec2.id}"
}
resource "aws_instance" "jenkins-ec2" {
ami = "${var.aws-ec2-ami-id}"
instance_type = "${lookup(var.aws-ec2-type, var.env)}"
key_name = "${lookup(var.aws-key-name, var.env)}"
associate_public_ip_address = "true"
availability_zone = "${var.aws-availability-zone}"
vpc_security_group_ids = ["${aws_security_group.jenkins-web-ssh-sg.id}", "${aws_security_group.jenkins-default-sg.id}"]
subnet_id = "${var.jenkins-public-subnet-id}"
tags {
"Name" = "jenkins-ec2-${var.env}"
}
}
resource "aws_eip" "jenkins-eip" {
instance = "${aws_instance.jenkins-ec2.id}"
vpc = true
tags {
"Name" = "jenkins-ec2-${var.env}-eip"
}
}
The VPC subnet is taken from the vpc module's outputs.
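The ec2/variables.tf file itself is not shown in this post. A minimal sketch of what it might declare, reconstructed from the variables referenced in ec2.tf above (the variable names come from that code and the AMI ID from the plan output below, while the key pair names and EBS volume IDs are placeholder assumptions):
...
# ec2/variables.tf: a reconstructed sketch, not the project's actual file
variable "env" {}

variable "aws-availability-zone" {}

# filled from the vpc module's outputs via main.tf
variable "jenkins-public-subnet-id" {}
variable "jenkins-vpc-id" {}

# AMI ID as seen in the plan output below
variable "aws-ec2-ami-id" {
  default = "ami-34414d4d"
}

# per-environment instance type, described in the Dev/Production section below
variable "aws-ec2-type" {
  type = "map"
  default = {
    "dev"        = "t2.nano"
    "production" = "t2.medium"
  }
}

# per-environment SSH key pair and data EBS volume: placeholder values
variable "aws-key-name" {
  type = "map"
  default = {
    "dev"        = "jenkins-dev-key"
    "production" = "jenkins-production-key"
  }
}

variable "ec2-data-ebs-id" {
  type = "map"
  default = {
    "dev"        = "vol-xxxxxxxx"
    "production" = "vol-yyyyyyyy"
  }
}
...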
Dev/Production in Terraform
To redefine variable values for the Dev and Production environments in Terraform, you can use at least three approaches.
variables mapping
The first one, which is used in this example, is variables mapping. For example, the instance_type parameter for the ec2 module's aws_instance resource is set in the following way:
...
instance_type = "${lookup(var.aws-ec2-type, var.env)}"
...
Then a mapping is created in variables.tf which contains two values with two instance types:
...
variable "aws-ec2-type" {
description = "EC2 instance type for Dev and prod"
type = "map"
default = {
"dev" = "t2.nano"
"production" = "t2.medium"
}
}
...
Here, depending on the value of the env variable which is set in the bash script above, one of the values will be chosen: t2.nano for Dev or t2.medium for Production.
Env/Prod directories
Another approach, which is more flexible and uses the modules concept in a more correct way, is to create separate directories, for example develop and production, each with its own main.tf and variables.tf files. Then the ec2 module can be called from the main.tf in such a directory, with the necessary variables and their values coming from that directory's variables.tf, as sketched below.
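A minimal sketch of this layout, assuming a hypothetical environments/production directory next to the modules (the directory name, the relative module paths, and the hard-coded values taken from the script's defaults are assumptions, not files from this project):
...
# environments/production/main.tf: a hypothetical per-environment entry point
provider "aws" {
  region  = "eu-west-1"
  profile = "jenkins-ci-provisioning"
}

data "aws_availability_zones" "available" {
  state = "available"
}

# the same shared modules, referenced by a relative path
module "vpc" {
  source                   = "../../vpc"
  env                      = "production"
  aws-availability-zone    = "${data.aws_availability_zones.available.names[0]}"
  monitoring-prod-vpc-id   = "vpc-51e8b639"
  monitoring-prod-region   = "us-east-2"
  monitoring-prod-vpc-cidr = "10.0.1.0/24"
  aws-profile              = "jenkins-ci-provisioning"
}

module "ec2" {
  source                   = "../../ec2"
  env                      = "production"
  aws-availability-zone    = "${data.aws_availability_zones.available.names[0]}"
  jenkins-public-subnet-id = "${module.vpc.jenkins-public-subnet-id}"
  jenkins-vpc-id           = "${module.vpc.jenkins-vpc-id}"
}
...
With this layout the environment-specific values live next to the main.tf that uses them, instead of in a shared map keyed by the env variable.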
Terraform workspaces
And the third one is to use Terraform's workspaces concept. Its purpose is actually a bit different (to test a project without making changes to the state files of the current infrastructure), but as per the documentation it can be used for this as well:
...
resource "aws_instance" "example" {
count = "${terraform.workspace == "default" ? 5 : 1}"
# ... other arguments
}
...
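Applied to this project, the same idea could replace the env variable passed from the script. For example, the instance type lookup in ec2.tf might key off the current workspace name instead (just a sketch, assuming dev and production workspaces were created beforehand with terraform workspace new):
...
resource "aws_instance" "jenkins-ec2" {
  # hypothetical variant: select the type by workspace instead of var.env
  instance_type = "${lookup(var.aws-ec2-type, terraform.workspace)}"
  # ... other arguments unchanged
}
...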
The vpc module
And the last one is the vpc module:
$ ls -l vpc/
total 8
-rw-r--r-- 1 setevoy setevoy 1073 Jun 28 10:37 variables.tf
-rw-r--r-- 1 setevoy setevoy 2382 Jun 28 10:39 vpc.tf
Its main file, vpc.tf:
provider "aws" {
alias = "peer"
region = "${var.monitoring-prod-region}"
profile = "${var.aws-profile}"
}
resource "aws_vpc" "jenkins-vpc" {
cidr_block = "${lookup(var.jenkins-vpc-cidr, var.env)}"
assign_generated_ipv6_cidr_block = true
enable_dns_hostnames = true
tags {
"Name" = "jenkins-${var.env}-vpc"
}
}
resource "aws_subnet" "jenkins-public-subnet" {
vpc_id = "${aws_vpc.jenkins-vpc.id}"
cidr_block = "${lookup(var.jenkins-pub-subnet-cidr, var.env)}"
availability_zone = "${var.aws-availability-zone}"
tags {
"Name" = "jenkins-${var.env}-pub-net"
}
}
resource "aws_internet_gateway" "jenkins-igw" {
vpc_id = "${aws_vpc.jenkins-vpc.id}"
}
resource "aws_route_table" "jenkins-route-tbl" {
vpc_id = "${aws_vpc.jenkins-vpc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.jenkins-igw.id}"
}
route {
cidr_block = "${var.monitoring-prod-vpc-cidr}"
gateway_id = "${aws_vpc_peering_connection.monitoring-prod-vpc-peer.id}"
}
tags {
Name = "jenkins-${var.env}-route-table"
}
}
resource "aws_route_table_association" "public-assoc" {
subnet_id = "${aws_subnet.jenkins-public-subnet.id}"
route_table_id = "${aws_route_table.jenkins-route-tbl.id}"
}
resource "aws_vpc_peering_connection" "monitoring-prod-vpc-peer" {
peer_vpc_id = "${var.monitoring-prod-vpc-id}"
vpc_id = "${aws_vpc.jenkins-vpc.id}"
peer_region ="${var.monitoring-prod-region}"
tags {
Name = "VPC Peering Jenkins and Monitoring Prod"
}
}
resource "aws_vpc_peering_connection_accepter" "monitoring-peer-accepter" {
provider = "aws.peer"
vpc_peering_connection_id = "${aws_vpc_peering_connection.monitoring-prod-vpc-peer.id}"
auto_accept = true
}
output "jenkins-public-subnet-id" {
value = "${aws_subnet.jenkins-public-subnet.id}"
}
output "jenkins-vpc-id" {
value = "${aws_vpc.jenkins-vpc.id}"
}
cross-region VPC peering
The VPC peering will be created using the aws_vpc_peering_connection resource:
...
resource "aws_vpc_peering_connection" "monitoring-prod-vpc-peer" {
peer_vpc_id = "${var.monitoring-prod-vpc-id}"
vpc_id = "${aws_vpc.jenkins-vpc.id}"
peer_region ="${var.monitoring-prod-region}"
tags {
Name = "VPC Peering Jenkins and Monitoring Prod"
}
}
...
It takes the monitoring stack's region (us-east-2) in the peer_region parameter, which is passed via the terraform_exec.sh script's global variable $MON_PROD_REGION.
To activate the peering connection, the aws_vpc_peering_connection_accepter resource is used together with an additional aws provider that has its own region and an alias:
...
provider "aws" {
alias = "peer"
region = "${var.monitoring-prod-region}"
profile = "${var.aws-profile}"
}
...
The monitoring-prod-region variable is also set in terraform_exec.sh, via the $MON_PROD_REGION variable.
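One more thing to keep in mind: for traffic to come back from the monitoring VPC to Jenkins (the reverse ping check below), the monitoring VPC's route table also needs a route to the Jenkins CIDR. In this project that route is managed outside of this stack; a hedged sketch of how it could be added here through the aliased peer provider (the monitoring-prod-route-table-id variable is hypothetical and does not exist in the code above):
...
# hypothetical return route in the monitoring (accepter) VPC's route table,
# created in us-east-2 through the aliased "peer" provider
resource "aws_route" "monitoring-to-jenkins" {
  provider                  = "aws.peer"
  route_table_id            = "${var.monitoring-prod-route-table-id}"
  destination_cidr_block    = "${lookup(var.jenkins-vpc-cidr, var.env)}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.monitoring-prod-vpc-peer.id}"
}
...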
Similarly to the ec2 module, a mapping is used here to separate Dev/Prod values, for example the VPC CIDRs:
...
variable "jenkins-vpc-cidr" {
type = "map"
default = {
"dev" = "10.0.4.0/24"
"production" = "10.0.5.0/24"
}
}
...
Stack creation
“And now, with all this sh*t on board, we will try to fly” (c)
Let's run our stack creation (calling it a “stack” is just a habit from CloudFormation):
$ ./terraform_exec.sh -a
ENV=dev
AWS CLI profile: jenkins-ci-provisioning
AWS region: eu-west-1
Application cluster name: jenkins-ci-dev
Terraform backend S3 bucket name: terraform-jenkins-ci-dev
Terraform backend key filename: terraform-jenkins-ci-dev.tfstate
Are you sure to proceed? [y/n] y
- module.vpc
- module.ec2
Running Apply action...
Initializing modules...
- module.vpc
- module.ec2
Initializing the backend...
Initializing provider plugins...
...
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ module.ec2.aws_eip.jenkins-eip
id: <computed>
allocation_id: <computed>
association_id: <computed>
domain: <computed>
instance: "${aws_instance.jenkins-ec2.id}"
network_interface: <computed>
private_ip: <computed>
public_ip: <computed>
tags.%: "1"
tags.Name: "jenkins-ec2-dev-eip"
vpc: "true"
+ module.ec2.aws_instance.jenkins-ec2
id: <computed>
ami: "ami-34414d4d"
associate_public_ip_address: "true"
availability_zone: "eu-west-1a"
...
Plan: 12 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Plan complete. Are you sure to proceed? [y/n] y
data.aws_availability_zones.available: Refreshing state...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ module.ec2.aws_eip.jenkins-eip
id: <computed>
...
module.ec2.aws_eip.jenkins-eip: Creation complete after 2s (ID: eipalloc-ec98d0d1)
module.ec2.aws_volume_attachment.jenkins-data-ebs-attach: Still creating... (10s elapsed)
module.ec2.aws_volume_attachment.jenkins-data-ebs-attach: Still creating... (20s elapsed)
module.ec2.aws_volume_attachment.jenkins-data-ebs-attach: Creation complete after 23s (ID: vai-1099139600)
Apply complete! Resources: 12 added, 0 changed, 0 destroyed.
Check the peering connections:
Check the ping from the Jenkins host to the monitoring host:
admin@ip-10-0-4-10:~$ ping 10.0.1.6 -c 1
PING 10.0.1.6 (10.0.1.6) 56(84) bytes of data.
64 bytes from 10.0.1.6: icmp_seq=1 ttl=64 time=85.2 ms
--- 10.0.1.6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 85.214/85.214/85.214/0.000 ms
And back:
admin@monitonrig-production:~$ ping 10.0.4.10 -c 1
PING 10.0.4.10 (10.0.4.10) 56(84) bytes of data.
64 bytes from 10.0.4.10: icmp_seq=1 ttl=64 time=85.4 ms
--- 10.0.4.10 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 85.440/85.440/85.440/0.000 ms
Done.