Introduction
Hello folks!
Today I would like to show you how you can manage your application's infrastructure needs and deployment lifecycle with a single CLI called Furnace.
Let's dive right into it, shall we?
What is Furnace?
To answer that question, we must first understand what AWS CloudFormation and GCP Deployment Manager are.
In short:
AWS CloudFormation
AWS CloudFormation provides a means of describing an infrastructure using a YAML or JSON based configuration entity. It creates a stack, which handles resources in a nice, grouped, concise way. Let's take a look at an example (a lot more can be found here: CloudFormation template Examples):
Parameters:
  KeyName:
    Description: The EC2 Key Pair to allow SSH access to the instance
    Type: 'AWS::EC2::KeyPair::KeyName'
Resources:
  Ec2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      SecurityGroups:
        - !Ref InstanceSecurityGroup
        - MyExistingSecurityGroup
      KeyName: !Ref KeyName
      ImageId: ami-7a11e213
  InstanceSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Enable SSH access via port 22
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
A lot is going on here, but after a little reading we'll get used to it. The above template creates a basic EC2 instance, with a security group allowing SSH access based on a key. It defines a single parameter called KeyName, which we can provide on the CLI later on. The KeyName is the name of the SSH key that the created EC2 instance will use for access. It must already exist in the account being used.
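To illustrate, this is how that parameter could be supplied if you were creating the stack straight through the AWS CLI (my-keypair and ec2-instance.yaml are placeholder names):

# Create the stack and pass in the KeyName parameter; the key pair must
# already exist in the target account and region.
aws cloudformation create-stack \
  --stack-name my-ec2-stack \
  --template-body file://ec2-instance.yaml \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair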
Notice this: !Ref KeyName. This is an intrinsic function call in the template. This function, and many more, are what make CloudFormation templates so powerful: there are conditionals, maps, arrays, and static variables. They allow the creation of dynamic infrastructure templates that produce purpose-built solutions for different scenarios.
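To give a flavor of these, here is a small sketch of a few of them working together; it's not part of the template above, and the EnvType parameter and AMI IDs are made up for illustration:

Parameters:
  EnvType:
    Type: String
    Default: dev
Mappings:
  RegionAMI:
    us-east-1:
      AMI: ami-11111111111111111
    eu-central-1:
      AMI: ami-22222222222222222
Conditions:
  IsProduction: !Equals [!Ref EnvType, prod]
Resources:
  Ec2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      # Look up the AMI for whichever region the stack runs in
      ImageId: !FindInMap [RegionAMI, !Ref 'AWS::Region', AMI]
      # Use a bigger instance only when EnvType is prod
      InstanceType: !If [IsProduction, t2.large, t2.micro]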
GCP Deployment Manager
GCP has a similar service called Deployment Manager. It basically does the same thing, but with a different mindset. It uses Jinja2 templates for creating dynamic infrastructure, and it's a bit more powerful since it allows full Python support, either inline in the template file or in an actual Python file. It also defines schema files which bind parameters. So you could create a YAML template which uses Jinja templates with variables, then define a JSON or YAML file containing those variables, and automate the whole process without having to enter anything on the CLI.
Let's take a look at an example (there are a lot located here: GCP Deployment Manager Templates):
The main YAML driver:
imports:
  - path: cloudbuild.jinja
resources:
  - name: build
    type: cloudbuild.jinja
    properties:
      resourceToList: deployments
The Jinja template, which contains the logic and uses the YAML file's property:
resources:
  - name: build-something
    action: gcp-types/cloudbuild-v1:cloudbuild.projects.builds.create
    metadata:
      runtimePolicy:
        - UPDATE_ALWAYS
    properties:
      steps:
        - name: gcr.io/cloud-builders/gcloud
          args:
            - deployment-manager
            - {{ properties['resourceToList'] }}
            - list
      timeout: 120s
It can get pretty complicated, especially with multiple files lying around. It takes a bit of learning to get it right and to understand the connections between the templates.
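As an example, a schema bound to the cloudbuild.jinja template above, saved next to it as cloudbuild.jinja.schema, might look something like this (a sketch; the descriptions are my own):

info:
  title: Cloud Build runner
  description: Runs a Cloud Build job that lists Deployment Manager resources.

properties:
  resourceToList:
    type: string
    default: deployments
    description: The resource type handed to gcloud deployment-manager.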
Furnace
So where does Furnace fit into all of this?
Furnace provides a simple way of dealing with these deployments. It basically gives you a CRUD CLI for either of these services, requiring a minimal amount of configuration in the process. It's a lightweight, behind-the-scenes tool for deploying an application to these environments. For AWS it uses CodeDeploy to achieve this; for GCP it uses the Deployment Manager itself, because frankly it's dead simple to define a code deployment in GCP land.
Let's take a look at both of these:
AWS CodeDeploy
Let's say you have a single EC2 instance you would like to deploy a web-app to. The simplest CloudFormation template for that would look something like this:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "AWS CloudFormation Sample Template EC2InstanceWithSecurityGroupSample: Create an Amazon EC2 instance running the Amazon Linux AMI. The AMI is chosen based on the region in which the stack is run. This example creates an EC2 security group for the instance to give you SSH access. **WARNING** This template creates an Amazon EC2 instance. You will be billed for the AWS resources used if you create a stack from this template.",
"Parameters": {
"KeyName": {
"Description": "Name of an existing EC2 KeyPair to enable SSH access to the instance",
"Type": "AWS::EC2::KeyPair::KeyName",
"ConstraintDescription": "must be the name of an existing EC2 KeyPair.",
"Default": "NonExisting"
},
"SSHLocation": {
"Description": "The IP address range that can be used to SSH to the EC2 instances",
"Type": "String",
"MinLength": "9",
"MaxLength": "18",
"Default": "0.0.0.0/0",
"AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
"ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
}
},
"Resources": {
"InstanceProfile": {
"Type": "AWS::IAM::InstanceProfile",
"Properties": {
"Path": "/",
"Roles": [
{
"Ref": "Role"
}
]
}
},
"Role": {
"Type": "AWS::IAM::Role",
"Properties": {
"Path": "/",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
}
}
},
"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"IamInstanceProfile" : { "Ref" : "InstanceProfile" },
"InstanceType": "t2.micro",
"SecurityGroups": [
{
"Ref": "InstanceSecurityGroup"
}
],
"KeyName": {
"Ref": "KeyName"
},
"ImageId": "ami-0cc293023f983ed53",
"Tags": [
{
"Key": "fu_stage",
"Value": {
"Ref": "AWS::StackName"
}
}
],
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"\n",
[
"#!/bin/bash -v",
"sudo yum -y update",
"sudo yum -y install ruby wget",
"cd /home/ec2-user/",
"wget https://aws-codedeploy-eu-central-1.s3.amazonaws.com/latest/install",
"chmod +x ./install",
"sudo ./install auto",
"sudo service codedeploy-agent start"
]
]
}
}
}
},
"InstanceSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Enable SSH access via port 22",
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": "22",
"ToPort": "22",
"CidrIp": {
"Ref": "SSHLocation"
}
},
{
"IpProtocol": "tcp",
"FromPort": "80",
"ToPort": "80",
"CidrIp": "0.0.0.0/0"
}
]
}
}
},
"Outputs": {
"InstanceId": {
"Description": "InstanceId of the newly created EC2 instance",
"Value": {
"Ref": "EC2Instance"
}
},
"AZ": {
"Description": "Availability Zone of the newly created EC2 instance",
"Value": {
"Fn::GetAtt": [
"EC2Instance",
"AvailabilityZone"
]
}
},
"PublicDNS": {
"Description": "Public DNSName of the newly created EC2 instance",
"Value": {
"Fn::GetAtt": [
"EC2Instance",
"PublicDnsName"
]
}
},
"PublicIP": {
"Description": "Public IP address of the newly created EC2 instance",
"Value": {
"Fn::GetAtt": [
"EC2Instance",
"PublicIp"
]
}
}
}
}
We need to do a lot of things here. The UserData section sets up the CodeDeploy daemon for us so that AWS can deploy code to this instance. We need a security group which allows SSH access for debugging the instance and opens HTTP access for the application. And we define two parameters: the key name, and the SSH location in case we would like to restrict it to a custom CIDR range.
Notice the tag fu_stage. This is very important, because it's the tag Furnace uses later on to find the instances to deploy code to.
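Conceptually, the lookup boils down to a tag filter like this AWS CLI query (just an illustration of the idea, not Furnace's actual implementation):

# List running instances whose fu_stage tag matches the stack name
aws ec2 describe-instances \
  --filters "Name=tag:fu_stage,Values=MyStack" \
            "Name=instance-state-name,Values=running"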
Like I said, Furnace needs a little bit of configuration in order to find the right template. You can have as many templates as you want; this is how it goes:
Create a folder structure like this:
.
├── stacks
│ ├── simple.template
│ └── mystack.yaml
└── .mystack.furnace
Where .mystack.furnace contains this single line:
stacks/mystack.yaml
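Bootstrapping that layout is a couple of commands:

# Create the stacks folder and point Furnace at its configuration file
mkdir -p stacks
echo "stacks/mystack.yaml" > .mystack.furnace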
This YAML file is Furnace's configuration file and might contain something like this:
main:
  stackname: MyStack
  spinner: 1
  plugins:
    plugin_path: "./plugins"
aws:
  code_deploy_role: CodeDeployServiceRole
  region: us-east-1
  template_name: simple.template
  app_name: stack-app
  code_deploy:
    # Only needed in case S3 is used for code deployment
    code_deploy_s3_bucket: furnace_code_bucket
    # The name of the zip file in case it's on a bucket
    code_deploy_s3_key: furnace_deploy_app
    # In case a Git Repository is used for the application, define these two settings
    git_account: Skarlso/furnace-codedeploy-app
    git_revision: b80ea5b9dfefcd21e27a3e0f149ec73519d5a6f1
You can check out what all of these do in the README.md, but for now the important bits are: stackname, the name of the stack to create; the code_deploy options, which use either S3 or Git to locate the code to be deployed; and template_name, which is used to find the template.
We then simply say:
furnace-aws create mystack
... which will go on and create the stack and wait for it to complete.
If everything goes okay, we can view our stack with furnace-aws status mystack, delete it with furnace-aws delete mystack, or update it via furnace-aws update mystack. Updating is done with ChangeSets, so you will be able to review what is going to be updated. And if you define a rolling update, it will be rolled out according to the instance count you defined.
Once your infrastructure is complete, you can deploy your code to it with:
furnace-aws push
This will use the settings above to look for the code and deploy it. If everything goes well, you should be able to access your application via the public URL of the EC2 instance (in this simple case). If you would like to push out a new version, just run the same command again.
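Putting the whole AWS lifecycle together, a session looks roughly like this:

furnace-aws create mystack   # provision the CloudFormation stack
furnace-aws status mystack   # check on the stack
furnace-aws push             # deploy the application with CodeDeploy
# ...work on your app, then push out the new version:
furnace-aws push
furnace-aws delete mystack   # tear everything down when finished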
GCP Application Deployment
For GCP it's the same, except there is no push command, because GCP takes care of application versioning and deployment. Your application should live in a GCP-hosted Git repository (or in a private one, or be bound to GitHub; that's up to you). The directory structure and configuration options are almost the same, but for GCP, Furnace's config looks like this:
main:
  project_name: test-123
  spinner: 1
gcp:
  template_name: google_template.yaml
  stack_name: test2-stack
A lot simpler. GCP takes care of the rest with Jinja, YAML, and whatnot. For a more complex example, look here: Furnace GCP.
Basically it uses a startup_script.sh to deploy the right code like this:
- key: startup-script-url
  value: gs://{{ properties["bucket"] }}/startup-script.sh
The startup script is located here: startup_script.sh. It's a bit complex, but the two important bits are these:
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/$PROJECTID/r/<YOUR_REPO_HERE> /opt/app
and
# Configure supervisor to start gunicorn inside of our virtualenv and run the
# application.
cat >/etc/supervisor/conf.d/python-app.conf << EOF
[program:pythonapp]
directory=/opt/app/7-gce
command=/opt/app/7-gce/env/bin/gunicorn main:app --bind 0.0.0.0:8080
autostart=true
autorestart=true
user=pythonapp
# Environment variables ensure that the application runs inside of the
# configured virtualenv.
environment=VIRTUAL_ENV="/opt/app/7-gce/env",PATH="/opt/app/7-gce/env/bin",\
HOME="/home/pythonapp",USER="pythonapp"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
This creates a supervisord service that will run the Python application. It could be anything that runs continuously: nginx, Caddy, whatever. The point is that this script defines how to deploy your application and where it's located.
To run this, again, you simply call:
furnace-gcp create teststack
Where .teststack.furnace contains this single line: stacks/gcp_furnace_config.yaml.
And the configuration structure is this:
.
├── stacks
│ ├── gcp_furnace_config.yaml
│ ├── google_template.yaml
│ ├── simple_template.jinja
│ └── simple_template.jinja.schema
└── .teststack.furnace
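Bootstrapping the GCP example end to end is therefore just:

mkdir -p stacks               # add the yaml / jinja / schema files shown above
echo "stacks/gcp_furnace_config.yaml" > .teststack.furnace
furnace-gcp create teststack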
Plugins
Furnace also provides plugins at two stages: before and after stack creation. Plugins can be written in any language which supports gRPC. There are examples of Go plugins here: Furnace Go Plugins, and an example of a Python plugin here: Furnace Python Plugin.
Conclusion
Huh. So this might seem like quite a lot, but the important bit here is that Furnace is not in your way. You don't lock yourself into Furnace by using it, the way you do with Terraform: Furnace uses the existing deployment management services of either AWS or GCP. If you decide to stop using Furnace, your configurations and stacks will still remain, and you can simply switch to boto or gcloud. Furnace also provides separate binaries for AWS, GCP, and DigitalOcean, so it's really small compared to Terraform, which is around 110MB: the Furnace binary is around 16MB for AWS and around 10MB for GCP.
Thank you for reading!
And go, check out Furnace.
Gergely.