Hi everyone!
Many thanks for your support on my last blog post:
https://dev.to/simerca/use-native-gps-with-vuejs-for-ios-and-android-1kk7
Today I'll show you how to deploy a Postgres database in minutes with Terraform and AWS, with a primary/replica setup and hourly backups!
So first, what is Terraform?
Terraform is a tool for Infrastructure as Code, and it's very simple to use: you just write the configuration you need and apply it with a single command:
terraform apply
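As a tiny illustration (not from the repo we'll use below, just a sketch with placeholder values), a complete configuration that launches one EC2 instance can be as short as this:
/* main.tf - a minimal sketch, the AMI and instance type are placeholders */
provider "aws" {
  region = "eu-west-3"
}

resource "aws_instance" "example" {
  ami           = "ami-00798d7180f25aac2"
  instance_type = "t2.micro"
}
Running terraform apply against this file is all it takes to create the instance, and terraform destroy removes it again.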
First, install Terraform:
https://learn.hashicorp.com/tutorials/terraform/install-cli
Next, set up the configuration files.
Please use my public repo on GitLab:
git clone https://gitlab.com/Simerca/terraform-postgres
Next, open the first file and edit it with your own configuration:
variables.tf
// Default AWS Access Credentials
variable "access_key" {
  default = "****"
}

variable "secret_key" {
  default = "*****"
}

variable "region" {
  default = "eu-west-3"
}

variable "bucket" {
  default = "bucket"
}

variable "aws_ami" {
  default = "ami-00798d7180f25aac2"
}

variable "aws_type" {
  default = "t2.micro"
}

// Other Configuration
variable "workspace" {
  default = "user"
}

variable "password" {
  default = "*****"
}
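Rather than committing real credentials as defaults, you can leave the defaults as placeholders and override them in a terraform.tfvars file, which Terraform loads automatically. A sketch with placeholder values:
access_key = "AKIAXXXXXXXXXXXXXXXX"
secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
region     = "eu-west-3"
bucket     = "my-backup-bucket"
workspace  = "myapp"
password   = "a-strong-password"
Just remember to keep terraform.tfvars out of version control.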
Next, let's look at the main file:
app-instances.tf
/* Setup our aws provider */
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

resource "aws_instance" "primary_1" {
  ami             = "${var.aws_ami}"
  instance_type   = "${var.aws_type}"
  security_groups = ["${aws_security_group.swarm.name}"]
  key_name        = "${aws_key_pair.deployer.key_name}"

  connection {
    host        = self.public_ip
    user        = "ec2-user"
    private_key = "${file("${path.module}/id_rsa.pem")}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum update -y",
      "sudo yum install git -y",
      "sudo amazon-linux-extras install docker -y",
      "sudo service docker start",
      "sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose",
      "sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose",
      "sudo chmod +x /usr/local/bin/docker-compose;",
      "sudo docker network create web",
      "sudo docker run --restart=unless-stopped --name=postgres -d -p 5432:5432 -e POSTGRES_DB=${var.workspace} -e POSTGRES_USER=${var.workspace} -e POSTGRES_PASSWORD=${var.password} -v $(pwd)/data:/var/lib/postgresql/data postgres",
      "sudo docker run -d --restart=unless-stopped -p 3000:3000 -e PW2_ADHOC_CONN_STR=\"postgresql://${var.workspace}:${var.password}@${self.public_ip}:5432/${var.workspace}\" -e PW2_GRAFANAUSER=admin -e PW2_GRAFANAPASSWORD=admin -e PW2_ADHOC_CONFIG=exhaustive -e PW2_ADHOC_CREATE_HELPERS=true --name pw2 cybertec/pgwatch2-postgres",
      "sudo docker run -d --restart=unless-stopped --name=postgres_backup -e SCHEDULE='@hourly' -e S3_REGION=${var.region} -e S3_ACCESS_KEY_ID=${var.access_key} -e S3_SECRET_ACCESS_KEY=${var.secret_key} -e S3_BUCKET=${var.bucket} -e POSTGRES_DATABASE=${var.workspace} -e POSTGRES_USER=${var.workspace} -e POSTGRES_HOST=${self.public_ip} -e POSTGRES_PASSWORD=${var.password} -e S3_PREFIX=${var.workspace} -e POSTGRES_EXTRA_OPTS='--schema=public --blobs' schickling/postgres-backup-s3"
    ]
  }

  tags = {
    Name = "${var.workspace}-primary"
  }
}
resource "aws_instance" "replica_1" {
ami = "ami-00798d7180f25aac2"
instance_type = "${var.aws_type}"
security_groups = ["${aws_security_group.swarm.name}"]
key_name = "${aws_key_pair.deployer.key_name}"
connection {
host = self.public_ip
user = "ec2-user"
private_key = "${file("${path.module}/id_rsa.pem")}"
}
provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo yum install git -y",
"sudo amazon-linux-extras install docker -y",
"sudo service docker start",
"sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose",
"sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose",
"sudo chmod +x /usr/local/bin/docker-compose;",
"sudo docker network create web",
"sudo docker run --restart=unless-stopped --name=postgres -d -p 5432:5432 -e POSTGRES_DB=${var.workspace} -e POSTGRES_USER=${var.workspace} -e POSTGRES_PASSWORD=${var.password} -v $(pwd)/data:/var/lib/postgresql/data postgres",
"sudo docker run -d --restart=unless-stopped -p 3000:3000 -e PW2_ADHOC_CONN_STR=\"postgresql://${var.workspace}:${var.password}@${self.public_ip}:5432/${var.workspace}\" -e PW2_GRAFANAUSER=admin -e PW2_GRAFANAPASSWORD=admin -e PW2_ADHOC_CONFIG=exhaustive -e PW2_ADHOC_CREATE_HELPERS=true --name pw2 cybertec/pgwatch2-postgres",
"sudo docker run -d --restart=unless-stopped --name=postgres_backup -e SCHEDULE='@hourly' -e S3_REGION=${var.region} -e S3_ACCESS_KEY_ID=${var.access_key} -e S3_SECRET_ACCESS_KEY=${var.secret_key} -e S3_BUCKET=${var.bucket} -e POSTGRES_DATABASE=${var.workspace} -e POSTGRES_USER=${var.workspace} -e POSTGRES_HOST=${self.public_ip} -e POSTGRES_PASSWORD=${var.password} -e S3_PREFIX=${var.workspace} -e POSTGRES_EXTRA_OPTS='--schema=public --blobs' schickling/postgres-restore-s3"
]
}
tags = {
Name = "${var.workspace}-replica"
}
}
Now, let's break down how it works.
Step 1:
Use the variables to configure the AWS provider; this is very simple.
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
You can also set these variables through environment variables with the TF_VAR_ prefix:
TF_VAR_access_key=XXXXXX terraform apply
Step 2:
Launch the instances and run the basic setup with a set of shell commands.
resource "aws_instance" "primary_1" {
ami = "${var.aws_ami}"
instance_type = "${var.aws_type}"
security_groups = ["${aws_security_group.swarm.name}"]
key_name = "${aws_key_pair.deployer.key_name}"
connection {
host = self.public_ip
user = "ec2-user"
private_key = "${file("${path.module}/id_rsa.pem")}"
}
provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo yum install git -y",
"sudo amazon-linux-extras install docker -y",
"sudo service docker start",
"sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose",
"sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose",
"sudo chmod +x /usr/local/bin/docker-compose;",
"sudo docker network create web",
"sudo docker run --restart=unless-stopped --name=postgres -d -p 5432:5432 -e POSTGRES_DB=${var.workspace} -e POSTGRES_USER=${var.workspace} -e POSTGRES_PASSWORD=${var.password} -v $(pwd)/data:/var/lib/postgresql/data postgres",
"sudo docker run -d --restart=unless-stopped -p 3000:3000 -e PW2_ADHOC_CONN_STR=\"postgresql://${var.workspace}:${var.password}@${self.public_ip}:5432/${var.workspace}\" -e PW2_GRAFANAUSER=admin -e PW2_GRAFANAPASSWORD=admin -e PW2_ADHOC_CONFIG=exhaustive -e PW2_ADHOC_CREATE_HELPERS=true --name pw2 cybertec/pgwatch2-postgres",
"sudo docker run -d --restart=unless-stopped --name=postgres_backup -e SCHEDULE='@hourly' -e S3_REGION=${var.region} -e S3_ACCESS_KEY_ID=${var.access_key} -e S3_SECRET_ACCESS_KEY=${var.secret_key} -e S3_BUCKET=${var.bucket} -e POSTGRES_DATABASE=${var.workspace} -e POSTGRES_USER=${var.workspace} -e POSTGRES_HOST=${self.public_ip} -e POSTGRES_PASSWORD=${var.password} -e S3_PREFIX=${var.workspace} -e POSTGRES_EXTRA_OPTS='--schema=public --blobs' schickling/postgres-backup-s3"
]
}
tags = {
Name = "${var.workspace}-primary"
}
}
Be careful with this line:
private_key = "${file("${path.module}/id_rsa.pem")}"
You need to create a new SSH key pair to access your instances. The Terraform code reads id_rsa.pem from the module directory, so generate the key there:
ssh-keygen -t rsa -N "" -b 2048 -C "assignment" -f ./id_rsa.pem
ssh-keygen -y -f ./id_rsa.pem > ./id_rsa.pub
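The instances also reference aws_key_pair.deployer, which is declared in a separate file of the repo. A minimal sketch of that resource, assuming the public key generated above sits next to the configuration (the key_name value here is just a placeholder):
resource "aws_key_pair" "deployer" {
  key_name   = "deployer-${var.workspace}"
  public_key = "${file("${path.module}/id_rsa.pub")}"
}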
After that, the remote-exec provisioner runs the commands in order:
- Install dependencies
- Install Docker
- Run the Postgres, monitoring, and backup containers
We run this set twice: one primary and one replica (master/slave). The only difference between the two is the last container: the primary runs schickling/postgres-backup-s3 to push hourly dumps to S3, while the replica runs schickling/postgres-restore-s3 to restore from those dumps.
Step 3:
Open the security group.
In another file we have:
security-group.tf
/* Default security group */
resource "aws_security_group" "swarm" {
  name        = "swarm-group-${var.workspace}"
  description = "Default security group that allows inbound and outbound traffic from all instances in the VPC"

  ingress {
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    self        = true
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
....
Here we open the ports needed to reach our instances over SSH (22) and to access Postgres (5432).
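Opening 5432 to 0.0.0.0/0 is convenient for a quick test, but for anything serious you should restrict ingress to your own IP. A sketch of a tighter rule, where the CIDR is a placeholder for your own address:
ingress {
  from_port   = 5432
  to_port     = 5432
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.10/32"]
}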
Let's try it!
First, run terraform init to download the AWS provider, then check your configuration (this is only a dry run, nothing is created yet):
terraform plan
Next, you can use this example command line to launch the Postgres instances inside your AWS account:
TF_VAR_access_key=XXXX TF_VAR_secret_key=XXXX TF_VAR_workspace=DBUser TF_VAR_password=SomePassword terraform apply
This command launches:
- 2 EC2 instances (primary / replica)
- 2 PostgreSQL databases on port 5432, with the user DBUser and the password SomePassword
- 2 pgwatch2 monitoring containers with Grafana on port 3000
- Hourly backups from the primary to an S3 bucket, restored on the replica
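If you want Terraform to print the addresses to connect to, you can add an outputs.tf; this is not part of the repo, just a convenience sketch:
output "primary_public_ip" {
  value = "${aws_instance.primary_1.public_ip}"
}

output "replica_public_ip" {
  value = "${aws_instance.replica_1.public_ip}"
}
terraform output will then show the public IPs of both instances after the apply.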
And when you want to destroy all the instances:
terraform destroy
Very simple, no?
A little self-promotion?
If you don't want to do all of this yourself, you can try my deployment tool to deploy Postgres with replication and monitoring in minutes.
Don't forget to reach out!
Thanks for your time!