Arseny Zinchenko

Originally published at rtfm.co.ua

AWS Elastic Kubernetes Service: a cluster creation automation, part 1 – CloudFormation

The task is: create automation to roll out an AWS Elastic Kubernetes Service cluster from scratch.

We will use:

  • Ansible: to automate CloudFormation stack creation and to execute eksctl with necessary parameters
  • CloudFormation with NestedStacks: to create an infrastructure – VPC, subnets, SecurityGroups, IAM-roles, etc
  • eksctl: to create a cluster itself using resources created by CloudFormation

The idea is:

  1. Ansible will use the cloudformation module to create the infrastructure
  2. using the Outputs of the stack created by CloudFormation, Ansible will generate a config file for eksctl
  3. Ansible will call eksctl with that config file to create the cluster

eksctl was chosen firstly because of the lack of time, and secondly because it uses CloudFormation under the hood, which has been used in my project for a long time, so all our infrastructure will stay in a homogeneous state.

Ansible will be running from a Jenkins job using a Docker image with the AWS CLI, Ansible, and eksctl.

Actually, do not consider this post as some kind of a “Best Practice” for such automation – it’s more like a Proof of Concept, and an example of how a vague idea in one’s head becomes real working code and services. Which exact tools to use – Terraform or CloudFormation, kops or eksctl – is a secondary question.

Also, there are two modules for Ansible to make working with Kubernetes easier – k8s and kubectl, but they both have the preview and community statuses, so I won’t use them here (yet).

The post is really long, so it’s divided into two parts:

  • in this one, the first part, we will start writing the CloudFormation templates
  • in the second one, we will write an Ansible playbook and roles to run CloudFormation and eksctl

I hope there are not too many inaccuracies, though some are possible, as this was written over a few days with repeated corrections and revamps; still, everything is described step by step, so the general idea should be clear enough.

All the resulting files from this post are available in the eksctl-cf-ansible GitHub repository. The link points to the branch with an exact copy of the code below.

The second part – AWS Elastic Kubernetes Service: a cluster creation automation, part 2 – Ansible, eksctl.

CloudFormation stacks

So, let’s begin with the CloudFormation stack.

We need to create:

  • 1 VPC
  • two public subnets for Application Load Balancers, Bastion hosts, Internet Gateways
  • two private subnets for Kubernetes Worker Nodes EC2, NAT Gateways

The EKS AMI for the Kubernetes Worker Nodes will be chosen by eksctl automatically, but you can find the whole list in the AWS documentation.
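As a side note, the ID of the current EKS-optimized AMI for a given Kubernetes version can also be queried from the public SSM Parameter Store path – the 1.15 version and the eu-west-2 region below are just an example:

$ aws --profile arseniy --region eu-west-2 ssm get-parameter --name /aws/service/eks/optimized-ami/1.15/amazon-linux-2/recommended/image_id --query 'Parameter.Value' --output text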

We will use CloudFormation Nested Stacks here (see the AWS: CloudFormation – Nested Stacks and stacks parameters Import/Export post for more details):

  1. the “Root-stack”, template file eks-root.json – describes the stacks to be created, determines parameters, etc:
    1. the “Region-stack”, template file eks-region-networking.json:
      • one VPC
      • Internet Gateway
        • Internet Gateway Association
    2. the “AvailabilityZones-stack”, template file eks-azs-networking.json – all its resources will be duplicated over two different AvailabilityZones of a region:
      • one public subnet
      • one private subnet
      • RouteTable for the public subnet
        • a Route into the 0.0.0.0/0 network via the Internet Gateway
        • a SubnetRouteTableAssociation to attach this RouteTable to the public subnet in this AvailabilityZone
      • RouteTable for the private subnet
        • a Route into the 0.0.0.0/0 network via a NAT Gateway
        • a SubnetRouteTableAssociation to attach this RouteTable to the private subnet in this AvailabilityZone
      • NAT Gateway
        • Elastic IP for the NAT Gateway

Go ahead with the root-stack template.

The Root stack

The first template will be used by the root-stack to create all other stacks.

Create directories for a future Ansible role:

$ mkdir -p roles/cloudformation/{tasks,files,templates}

In the roles/cloudformation/files/ directory create a new file eks-root.json – this will be our root-template:

$ cd roles/cloudformation/files/
$ touch eks-root.json
Parameters

It’s a good idea to think about the IP address blocks that will be used in your project. At the very least, you need to avoid overlapping network blocks to prevent VPC peering issues.

The second thing to consider is a whole networking model for your cluster and network plugin to use.

By default, AWS Elastic Kubernetes Service uses the CNI (Container Network Interface) plugin, which allows using Worker Node EC2 network interfaces (ENI – Elastic Network Interface). With this plugin, Kubernetes will allocate IP addresses from the VPC pool to the pods created; see amazon-vpc-cni-k8s and Pod Networking (CNI).

This solution has some advantages and disadvantages, check the great overview from the Weave Net — AWS and Kubernetes Networking Options and Trade-offs, and read about other plugins in the Kubernetes documentation – Cluster Networking.

Also, it’s worth checking the VPC and Subnet Sizing document.

For now, let’s add only the 10.0.0.0/16 block for the VPC – later it will be divided into 4 subnets:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "AWS CloudFormation stack for Kubernetes cluster",

  "Parameters": {

    "VPCCIDRBlock": {
      "Description": "VPC CidrBlock",
      "Type": "String",
      "Default": "10.0.0.0/16"
    }

  },

The subnets will be the following:

  • one public in AvailabilityZone A, /20, 4094 addresses
  • one private in AvailabilityZone A, /20, 4094 addresses
  • one public in AvailabilityZone B, /20, 4094 addresses
  • one private in AvailabilityZone B, /20, 4094 addresses

ipcalc can be used here:

$ ipcalc 10.0.0.0/16 --s 4094 4094 4094 4094 | grep Network | cut -d" " -f 1,4 | tail -4
Network: 10.0.0.0/20
Network: 10.0.16.0/20
Network: 10.0.32.0/20
Network: 10.0.48.0/20

4094 addresses must be enough for all EC2 instances and pods.

While writing this post, I found the best subnet calculator to be http://www.subnetmask.info.
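If ipcalc is not at hand, the same split can be done with Python’s ipaddress module – here we simply take the first four /20 networks of the 10.0.0.0/16 block:

$ python3 -c "import ipaddress; [print(n) for n in ipaddress.ip_network('10.0.0.0/16').subnets(new_prefix=20)]" | head -4
10.0.0.0/20
10.0.16.0/20
10.0.32.0/20
10.0.48.0/20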

Also, add an EKSClusterName parameter – we will pass a cluster name from Ansible here to create necessary CloudFormation tags:

...
    "EKSClusterName": {
      "Description": "EKS cluster name",
      "Type": "String"
    } 
...

The Network Region stack

Now we can create a template for the second stack. Let’s call it eks-region-networking.json.

VPC

In this template, we will describe our VPC; the root template will pass the VPC CIDR as a parameter here, and via Outputs we will pass the ID of the created VPC back to the root:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "AWS CloudFormation Region Networking stack for Kubernetes cluster",

  "Parameters" : {

    "VPCCIDRBlock": {
      "Description": "VPC CidrBlock",
      "Type": "String"
    },
    "EKSClusterName": {
      "Description": "EKS cluster name",
      "Type": "String"
   }

  },

  "Resources" : {

    "VPC" : {
      "Type" : "AWS::EC2::VPC",
      "Properties" : {
        "CidrBlock" : { "Ref": "VPCCIDRBlock" },
        "EnableDnsHostnames": true,
        "EnableDnsSupport": true,
        "Tags" : [
          {
            "Key" : "Name",
            "Value" : { "Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "vpc"] ] } },
          {
            "Key" : { "Fn::Join" : [ "", [ "kubernetes.io/cluster/", {"Ref" : "EKSClusterName"}] ] },
            "Value" : "owned"
          }
        ]
      }
    }

  },

  "Outputs" : {

    "VPCID" : {
      "Description" : "EKS VPC ID",
      "Value" : { "Ref" : "VPC" }
    }

  }
}

Go back to the root template to add a first nested stack creation.

The VPC ID will be taken from the Outputs of the network region stack and exposed via the root’s Outputs, so Ansible can grab it into a variable which will later be used for the eksctl config file.

At this moment the whole root template has to look like the following:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "AWS CloudFormation stack for Kubernetes cluster",
  "Parameters": {
    "VPCCIDRBlock": {
      "Description": "VPC CidrBlock",
      "Type": "String",
      "Default": "10.0.0.0/16"
    },
    "EKSClusterName": {
      "Description": "EKS cluster name",
      "Type": "String"
   }
  },
  "Resources": {
    "RegionNetworkStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "eks-region-networking.json",
        "Parameters": {
          "VPCCIDRBlock": { "Ref": "VPCCIDRBlock" },
          "EKSClusterName": { "Ref": "EKSClusterName"},
        }
      }
    }
  },
  "Outputs": {
    "VPCID" : {
      "Description" : "EKS VPC ID",
      "Value" : { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.VPCID"] }
    }
  }
}

Create an S3 bucket in the region you plan to use:

$ aws --profile arseniy --region eu-west-2 s3api create-bucket --bucket eks-cloudformation-eu-west-2 --region eu-west-2 --create-bucket-configuration LocationConstraint=eu-west-2

In a Production setup, it would be great to have S3 Versioning enabled for backup and history (although all the templates will also be stored in a GitHub repository).

Enable it:

$ aws --region eu-west-2 --profile arseniy s3api put-bucket-versioning --bucket eks-cloudformation-eu-west-2 --versioning-configuration Status=Enabled

Pack the eks-root.json and eks-region-networking.json templates, upload them to AWS S3, and save the resulting file to /tmp as packed-eks-stacks.json:

$ cd roles/cloudformation/files/
$ aws --profile arseniy --region eu-west-2 cloudformation package --template-file eks-root.json --output-template /tmp/packed-eks-stacks.json --s3-bucket eks-cloudformation-eu-west-2 --use-json

Deploy the stack:

$ aws --profile arseniy --region eu-west-2 cloudformation deploy --template-file /tmp/packed-eks-stacks.json --stack-name eks-dev
Waiting for changeset to be created.
Waiting for stack create/update to complete
Successfully created/updated stack - eks-dev

Check it:

The first child stack is created, the VPC is created – all good so far.
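Besides the AWS Console, the result can be checked from the CLI – for example, by printing the root stack’s Outputs, where the VPC ID should already be present:

$ aws --profile arseniy --region eu-west-2 cloudformation describe-stacks --stack-name eks-dev --query 'Stacks[0].Outputs'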

Internet Gateway

Add an Internet Gateway and VPCGatewayAttachment, so the Resources block of the region-stack will be:

...
  "Resources" : {
    "VPC" : {
      "Type" : "AWS::EC2::VPC",
      "Properties" : {
        "CidrBlock" : { "Ref": "VPCCIDRBlock" },
        "EnableDnsHostnames": true,
        "EnableDnsSupport": true,
        "Tags" : [
          {
            "Key" : "Name",
            "Value" : { "Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "vpc"] ] } },
          {
            "Key" : { "Fn::Join" : [ "", [ "kubernetes.io/cluster/", {"Ref" : "EKSClusterName"}] ] },
            "Value" : "owned"
          }
        ]
      }
    },
    "InternetGateway" : {
      "Type" : "AWS::EC2::InternetGateway",
      "Properties" : {
        "Tags" : [
          {"Key" : "Name", "Value" : { "Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "igw"] ] } }
        ]
      }
    },

    "AttachGateway" : {
       "Type" : "AWS::EC2::VPCGatewayAttachment",
       "Properties" : {
         "VpcId" : { "Ref" : "VPC" },
         "InternetGatewayId" : { "Ref" : "InternetGateway" }
       }
    }
  },
...

In its Outputs, add the InternetGateway ID to pass it back to the root stack, from where it will be passed to the Network AvailabilityZones stack for the future RouteTables of the public subnets:

...
  "Outputs" : {
    "VPCID" : {
      "Description" : "EKS VPC ID",
      "Value" : { "Ref" : "VPC" }
    },
    "IGWID" : {
      "Description" : "InternetGateway ID",
      "Value" : { "Ref" : "InternetGateway" }
    }
  }
}

And it’s time to start writing the Network AvailabilityZones stack template.

Network AvailabilityZones stack

Now, we need to specify resources to be duplicated over two AvailabilityZones.

These include:

  1. one public subnet
  2. one private subnet
  3. RouteTable for public subnets
    1. with a Route to the 0.0.0.0/0 network via an Internet Gateway
    2. and a SubnetRouteTableAssociation to attach the RouteTable to a public subnet in this AvailabilityZone
  4. RouteTable for private subnets
    1. with a Route to the 0.0.0.0/0 network via a NAT Gateway
    2. and a SubnetRouteTableAssociation to attach the RouteTable to a private subnet in this AvailabilityZone
  5. NAT Gateway
    1. Elastic IP for the NAT Gateway

The main question here is how to choose AvailabilityZones for those stacks, as some resources, like AWS::EC2::Subnet, need to have an AvailabilityZone specified.

A possible solution is to use the Fn::GetAZs CloudFormation function, which will be called from the root stack to get all AvailabilityZones of the region used for the cluster; they will then be passed to our NetworkAvailabilityZones stacks.

Most regions have three AvailabilityZones, but in this case, only two will be used (which is enough for fault tolerance).
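To see what Fn::GetAZs will return for the current region, the same list can be checked with the AWS CLI:

$ aws --profile arseniy --region eu-west-2 ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text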

Let’s begin with the subnets – one public and one private in each AvailabilityZone.

In this stack we need to pass a few new parameters:

  • VPC ID from the region stack
  • public subnet CIDR block
  • private subnet CIDR block
  • AvailabilityZone to create resources in
  • Internet Gateway ID from the region stack to use for RouteTables

Create a new template file, call it eks-azs-networking.json.

Parameters

Add parameters here:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "AWS CloudFormation AvailabilityZones Networking stack for Kubernetes cluster",
  "Parameters" : {
    "VPCID": {
      "Description": "VPC for resources",
      "Type": "String"
    },
    "EKSClusterName": {
      "Description": "EKS cluster name",
      "Type": "String"
    },
    "PublicSubnetCIDR": {
      "Description": "PublicSubnetCIDR",
      "Type": "String"
    },
    "PrivateSubnetCIDR": {
      "Description": "PrivateSubnetCIDR",
      "Type": "String"
    },
    "AZ": {
      "Description": "AvailabilityZone for resources",
      "Type": "String"
    },
    "IGWID": {
      "Description": "InternetGateway for PublicRoutes",
      "Type": "String"
    }
  },
Subnets

Add the Resources section with two resources – the public and private subnets:

...
"Resources" : {

  "PublicSubnet" : {
    "Type" : "AWS::EC2::Subnet",
    "Properties" : {
      "VpcId" : { "Ref" : "VPCID" },
      "CidrBlock" : {"Ref" : "PublicSubnetCIDR"},
      "AvailabilityZone" : { "Ref": "AZ" },
      "Tags" : [
        {
          "Key" : "Name",
          "Value" : { "Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "public-net", { "Ref": "AZ" } ] ] }
        },
        {
          "Key" : { "Fn::Join" : [ "", [ "kubernetes.io/cluster/", {"Ref" : "EKSClusterName"}] ] },
          "Value" : "shared"
        },
        {
          "Key" : "kubernetes.io/role/elb",
          "Value" : "1"
        }
      ]
    }
  },

  "PrivateSubnet" : {
    "Type" : "AWS::EC2::Subnet",
    "Properties" : {
      "VpcId" : { "Ref" : "VPCID" },
      "CidrBlock" : {"Ref" : "PrivateSubnetCIDR"},
      "AvailabilityZone" : { "Ref": "AZ" },
      "Tags" : [
        {
          "Key" : "Name",
          "Value" : { "Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "private-net", { "Ref": "AZ" } ] ] }
        },
        {
          "Key" : { "Fn::Join" : [ "", [ "kubernetes.io/cluster/", {"Ref" : "EKSClusterName"}] ] },
          "Value" : "shared"
        },
        {
          "Key" : "kubernetes.io/role/internal-elb",
          "Value" : "1"
        }
      ]
    }
  }

},

Pay attention to the "kubernetes.io/role/elb" tag for the public subnet and "kubernetes.io/role/internal-elb" for the private one – they will be needed later for the ALB Ingress controller.

In the Outputs, add the subnet IDs to pass them to the root stack, so Ansible can use them to create the eksctl config file for the future cluster, and add the AvailabilityZone here as well:

...
  "Outputs" : {

    "StackAZ" : {
      "Description" : "Stack location",
      "Value" : { "Ref" : "AZ" }
    },
    "PublicSubnetID" : {
      "Description" : "PublicSubnet ID",
      "Value" : { "Ref" : "PublicSubnet" }
    },
    "PrivateSubnetID" : {
      "Description" : "PrivateSubnet ID",
      "Value" : { "Ref" : "PrivateSubnet" }
    }

  }
}

Go back to the root template and add two more resources – one stack per AvailabilityZone – so its Resources section has to look like the following:

...
  "Resources": {
    "RegionNetworkStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "eks-region-networking.json",
        "Parameters": {
          "VPCCIDRBlock": { "Ref": "VPCCIDRBlock" },
          "EKSClusterName": { "Ref": "EKSClusterName"}
        }
      }
    },
    "AZNetworkStackA": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "eks-azs-networking.json",
        "Parameters": {
          "VPCID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.VPCID"] },
          "AZ": { "Fn::Select": [ "0", { "Fn::GetAZs": "" } ] },
          "IGWID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.IGWID"] },
          "EKSClusterName": { "Ref": "EKSClusterName"},
          "PublicSubnetCIDR": "10.0.0.0/20",
          "PrivateSubnetCIDR": "10.0.32.0/20"
        }
      }
    },
    "AZNetworkStackB": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "eks-azs-networking.json",
        "Parameters": {
          "VPCID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.VPCID"] },
          "AZ": { "Fn::Select": [ "1", { "Fn::GetAZs": "" } ] },
          "IGWID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.IGWID"] },
          "EKSClusterName": { "Ref": "EKSClusterName"},
          "PublicSubnetCIDR": "10.0.16.0/20",
          "PrivateSubnetCIDR": "10.0.48.0/20"
        }
      }
    }
  },
...

The Internet Gateway ID will be taken from the Outputs of the region stack and passed via Parameters to the AZNetworkStackA and AZNetworkStackB stacks, to be used for the public subnets’ Route.

CIDRs can be hardcoded for now – later we will use Mappings.

So, in the code above:

  • Fn::GetAZs"VPCID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.VPCID"] } — pass VPD ID from the region stack to AZ-stacks
  • "AZ": { "Fn::Select": ["0", { "Fn::GetAZs": "" }] } — choose the first element (index “0“) from the AvailabilityZones list, and the second element (index “1“) for the second stack
  • PublicSubnetCIDR and PrivateSubnetCIDR are hardcoded

Also, add the subnet IDs from the AvailabilityZones stacks to the root stack’s Outputs to make them accessible for Ansible for the eksctl parameters:

...
  "Outputs": {
    "VPCID" : {
      "Description" : "EKS VPC ID",
      "Value" : { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.VPCID"] }
    },
    "AStackAZ" : {
      "Description" : "Stack location",
      "Value" : { "Fn::GetAtt": ["AZNetworkStackA", "Outputs.StackAZ"] }
    },
    "APublicSubnetID" : {
      "Description" : "PublicSubnet ID",
      "Value" : { "Fn::GetAtt": ["AZNetworkStackA", "Outputs.PublicSubnetID"] }
    },
    "APrivateSubnetID" : {
      "Description" : "PrivateSubnet ID",
      "Value" : { "Fn::GetAtt": ["AZNetworkStackA", "Outputs.PrivateSubnetID"] }
    },
    "BStackAZ" : {
      "Description" : "Stack location",
      "Value" : { "Fn::GetAtt": ["AZNetworkStackB", "Outputs.StackAZ"] }
    },
    "BPublicSubnetID" : {
      "Description" : "PublicSubnet ID",
      "Value" : { "Fn::GetAtt": ["AZNetworkStackB", "Outputs.PublicSubnetID"] }
    },
    "BPrivateSubnetID" : {
      "Description" : "PrivateSubnet ID",
      "Value" : { "Fn::GetAtt": ["AZNetworkStackB", "Outputs.PrivateSubnetID"] }
    }
  }
}

Pack it, generate a new template as /tmp/packed-eks-stacks.json:

$ aws --profile arseniy --region eu-west-2 cloudformation package --template-file eks-root.json --output-template /tmp/packed-eks-stacks.json --s3-bucket eks-cloudformation-eu-west-2 --use-json


Deploy it:

$ aws --profile arseniy --region eu-west-2 cloudformation deploy --template-file /tmp/packed-eks-stacks.json --stack-name eks-dev

Check:

Okay.
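As before, the new Outputs with the subnet IDs can also be checked from the CLI:

$ aws --profile arseniy --region eu-west-2 cloudformation describe-stacks --stack-name eks-dev --query 'Stacks[0].Outputs[].[OutputKey,OutputValue]' --output table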

To finish this stack, we still need to add the following:

  1. RouteTable for the public subnet
    • a Route to the 0.0.0.0/0 via Internet Gateway
    • and a SubnetRouteTableAssociation to attach this RouteTable to the public subnet in this AvailabilityZone
  2. RouteTable for the private subnet
    • a Route to the 0.0.0.0/0 via NAT Gateway
    • and a SubnetRouteTableAssociation to attach this RouteTable to the private subnet in this AvailabilityZone
  3. NAT Gateway
    • Elastic IP for the NAT Gateway
NAT Gateway

To the Resources, add a NAT Gateway and an Elastic IP:

...
    "NatGwIPAddress" : {
      "Type" : "AWS::EC2::EIP",
      "Properties" : {
        "Domain" : "vpc"
      }
    },
    "NATGW" : {
      "DependsOn" : "NatGwIPAddress",
      "Type" : "AWS::EC2::NatGateway",
      "Properties" : {
        "AllocationId" : { "Fn::GetAtt" : ["NatGwIPAddress", "AllocationId"]},
        "SubnetId" : { "Ref" : "PublicSubnet"},
        "Tags" : [
          {"Key" : "Name", "Value" : { "Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "nat-gw", { "Ref": "AZ" } ] ] } }
        ]
      }
    }
...
Public RouteTable

Add a RouteTable for public subnets.

For the public route, we need to have an Internet Gateway ID, which is passed from the Region stack to the Root stack, and then to the AvailabilityZones-stack.

Add a RouteTable, one Route to the 0.0.0.0/0 via Internet Gateway and a SubnetRouteTableAssociation:

...
    "PublicRouteTable": {
      "Type": "AWS::EC2::RouteTable",
      "Properties": {
        "VpcId": { "Ref": "VPCID" },
        "Tags" : [
          {"Key" : "Name", "Value" : { "Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "public-rtb"] ] } }
        ]
      }
    },

    "PublicRoute": {
      "Type": "AWS::EC2::Route",
      "Properties": {
        "RouteTableId": {
          "Ref": "PublicRouteTable"
        },
        "DestinationCidrBlock": "0.0.0.0/0",
        "GatewayId": {
          "Ref": "IGWID"
        }
      }
    },

    "PublicSubnetRouteTableAssociation": {
      "Type": "AWS::EC2::SubnetRouteTableAssociation",
      "DependsOn": "PublicRouteTable",
      "Properties": {
        "SubnetId": {
          "Ref": "PublicSubnet"
        },
        "RouteTableId": {
          "Ref": "PublicRouteTable"
        }
      }
    }
...
Private RouteTable

Similarly, in the AvailabilityZones stack add a RouteTable and its resources, but in the Route use the NAT Gateway instead of the Internet Gateway:

...
    "PrivateRouteTable": {
      "Type": "AWS::EC2::RouteTable",
      "Properties": {
        "VpcId": { "Ref": "VPCID" },
        "Tags" : [
          {"Key" : "Name", "Value" : { "Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "priv-route", { "Ref": "AZ" } ] ] } }
        ]
      }
    },

    "PrivateRoute": {
      "Type": "AWS::EC2::Route",
      "Properties": {
        "RouteTableId": {
          "Ref": "PrivateRouteTable"
        },
        "DestinationCidrBlock": "0.0.0.0/0",
        "NatGatewayId": {
          "Ref": "NATGW"
        }
      }
    },

    "PrivateSubnetRouteTableAssociation": {
      "Type": "AWS::EC2::SubnetRouteTableAssociation",
      "Properties": {
        "SubnetId": {
          "Ref": "PrivateSubnet"
        },
        "RouteTableId": {
          "Ref": "PrivateRouteTable"
        }
      }
    }
...

Pack, deploy, check:

Nice – all networks and routes are up, so everything should be working now.

At this moment, we can spin up EC2 instances in both the public and private subnets to check (a rough sketch of the commands is shown after this list):

  1. SSH to an EC2 in the public subnet to check if its network connection is working
  2. SSH from the EC2 in the public subnet to an EC2 in the private subnet, to check the private subnet’s routing
  3. ping from the EC2 in the private subnet somewhere to the world to check if the NAT is working
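A rough sketch of such a check – the user name and the IP addresses below are placeholders to be replaced with real values:

# from the workstation, with SSH agent forwarding, to the instance in the public subnet
$ ssh -A ec2-user@<public-instance-ip>

# from that instance - to the instance in the private subnet
$ ssh ec2-user@<private-instance-ip>

# from the private instance - check that the NAT Gateway routes traffic to the world
$ ping -c 3 8.8.8.8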

Mappings and CIDRs for subnets

One more thing I’d like to change in the AvailabilityZones stacks is to implement a better way to create and pass CIDRs for the subnets.

So, currently, we are passing a full CIDR like 10.0.0.0/16 to the VPCCIDRBlock parameter:

...
    "VPCCIDRBlock": {
      "Description": "VPC CidrBlock",
      "Type": "String",
      "Default": "10.0.0.0/16"
    }
...

And then we need to create 4 dedicated networks with /20 mask – two for public subnets, two for private.

Also, at this moment we just hardcoded those values into the template:

...
        "Parameters": {
          "VPCID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.VPCID"] },
          "AZ": { "Fn::Select": [ "0", { "Fn::GetAZs": "" } ] },
          "IGWID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.IGWID"] },
          "PublicSubnetCIDR": "10.0.0.0/20",
          "PrivateSubnetCIDR": "10.0.32.0/20"
        }
...

This is obviously not a good idea, as it leaves us no flexibility at all: we’d like to be able to pass just one VPC block from a Jenkins parameter and let CloudFormation do all the rest.

Let’s see what we have to compose such four /20 networks for a VPC with the 10.0.0.0/16 block:

  • 10.0 – the first two octets, the network’s beginning
  • a third octet block – 0, 16, 32, 48
  • and the network mask – /20

Also, we will have VPCs with the CIDRs 10.0.0.0/16, 10.1.0.0/16, 10.2.0.0/16 for the Dev, Stage, Prod, etc. environments.

How can we combine all the data above?

Well, we can use the Fn::Split function to get the first two octets from a VPC CIDR – we will get 10.0 or 10.1 and so on.

But what if a VPC CIDR is 192.168.0.0/16? Well, then we have to grab the first two octets as dedicated objects.

And for the remaining two octets and the subnet mask, we can create a CloudFormation Mapping and then combine everything using the Fn::Join function.

Let’s try it – add a mapping to the root stack template:

...
  "Mappings": {
    "AZSubNets": {
      "public": {
        "zoneA": "0.0/20",
        "zoneB": "16.0/20"
      },
      "private": {
        "zoneA": "32.0/20",
        "zoneB": "48.0/20"
      }
    }
  },
...

And now the most interesting part: in the AZNetworkStackA and AZNetworkStackB resources of the root template, in their Parameters, instead of:

...
"PublicSubnetCIDR": "10.0.0.0/20",
...

We need to construct something like:

"<VPC-CIDR-FIRST-OCTET> + <VPC-CIDR-SECOND-OCTET> + <ZONE-FROM-MAPPING>"

I.e:

{ "Fn::Join" : [ ".", [ "<VPC-CIDR-FIRST-OCTET>", "<VPC-CIDR-SECOND-OCTET>", "<ZONE-FROM-MAPPING>" ] ] }

To obtain the VPC-CIDR-FIRST-OCTET use the Fn::Select and Fn::Split functions:

{ "Fn::Select" : [ "0", { "Fn::Split": [ ".", { "Ref": "VPCCIDRBlock" } ] } ] }

And in the same way for the second one, but in the Fn::Select use index 1:

{ "Fn::Select" : [ "1", { "Fn::Split": [ ".", { "Ref": "VPCCIDRBlock" } ] } ] }

And to select data from the mapping, we can use Fn::FindInMap, choosing by the subnet’s type (public or private) and by the AvailabilityZone:

{ "Fn::FindInMap" : [ "AZSubNets", "public", "zoneA" ] }

So, for the AZNetworkStackA we will have the following code:

...
          "PublicSubnetCIDR": {
            "Fn::Join" : [".", [
              { "Fn::Select": [ "0", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::Select": [ "1", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::FindInMap" : [ "AZSubNets", "public", "zoneA" ] } 
            ]]
          },                  
          "PrivateSubnetCIDR": { 
            "Fn::Join" : [".", [
              { "Fn::Select": [ "0", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::Select": [ "1", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::FindInMap" : [ "AZSubNets", "private", "zoneA" ] } 
            ]]
          }
..

And for the AZNetworkStackB, in the { "Fn::FindInMap" : ["AZSubNets", ... ] } calls we will use the zoneB selector.

Altogether, our stacks’ resources have to look like the following:

...
    "AZNetworkStackA": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "eks-azs-networking.json",
        "Parameters": {
          "VPCID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.VPCID"] },
          "AZ": { "Fn::Select": [ "0", { "Fn::GetAZs": "" } ] },
          "IGWID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.IGWID"] },
          "EKSClusterName": { "Ref": "EKSClusterName"},
          "PublicSubnetCIDR": {
            "Fn::Join" : [".", [
              { "Fn::Select": [ "0", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::Select": [ "1", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::FindInMap" : [ "AZSubNets", "public", "zoneA" ] }
            ]]
          },
          "PrivateSubnetCIDR": {
            "Fn::Join" : [".", [
              { "Fn::Select": [ "0", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::Select": [ "1", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::FindInMap" : [ "AZSubNets", "private", "zoneA" ] }
            ]]
          }
        }
      }
    },
    "AZNetworkStackB": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "eks-azs-networking.json",
        "Parameters": {
          "VPCID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.VPCID"] },
          "AZ": { "Fn::Select": [ "1", { "Fn::GetAZs": "" } ] },
          "IGWID": { "Fn::GetAtt": ["RegionNetworkStack", "Outputs.IGWID"] },
          "EKSClusterName": { "Ref": "EKSClusterName"},
          "PublicSubnetCIDR": {
            "Fn::Join" : [".", [
              { "Fn::Select": [ "0", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::Select": [ "1", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::FindInMap" : [ "AZSubNets", "public", "zoneB" ] }
            ]]
          },
          "PrivateSubnetCIDR": {
            "Fn::Join" : [".", [
              { "Fn::Select": [ "0", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::Select": [ "1", { "Fn::Split": [".",  { "Ref": "VPCCIDRBlock"} ] } ] },
              { "Fn::FindInMap" : [ "AZSubNets", "private", "zoneB" ] }
            ]]
          }
        }
      }
    }
...

Deploy, check:

Actually, nothing has changed, as our CIDRs are the same as they were before this change.

eksctl – a stack creation

Finally, let’s spin up a test cluster to check if everything is working, and then we can move on to Ansible and its roles.

Take the necessary parameters from the Outputs of the root stack:
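If it’s more convenient, the same values can be fetched with the AWS CLI – for example, the VPC ID (the other Outputs can be taken in the same way by their OutputKey):

$ aws --profile arseniy --region eu-west-2 cloudformation describe-stacks --stack-name eks-dev --query "Stacks[0].Outputs[?OutputKey=='VPCID'].OutputValue" --output text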

Now we can create directories for the future Ansible eksctl role, in the same way as we did at the very beginning of this post for the CloudFormation role:

$ cd ../../../
$ mkdir -p roles/eksctl/{templates,tasks}

Now, create a cluster’s config-file eks-cluster-config.yml:

$ touch roles/eksctl/templates/eks-cluster-config.yml
$ cd roles/eksctl/templates/

Set the cluster’s parameters here:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-dev
  region: eu-west-2
  version: "1.15"
nodeGroups:
  - name: worker-nodes
    instanceType: t3.medium
    desiredCapacity: 2
    privateNetworking: true
vpc:
  id: "vpc-00f7f307d5c7ae70d"
  subnets:
    public:
      eu-west-2a:
        id: "subnet-06e8424b48709425a"
      eu-west-2b:
        id: "subnet-07a23a9e23cbb382a"
    private:
      eu-west-2a:
        id: "subnet-0c8a44bdc9aa6726f"
      eu-west-2b:
        id: "subnet-026c14589f4a41900"
  nat:
    gateway: Disable
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

Create the cluster:

$ eksctl --profile arseniy create cluster -f eks-cluster-config.yml

Pay attention to the names used by eksctl – for its own CloudFormation stack it will construct a name like eksctl-<cluster-name>-cluster – keep this in mind when we start writing the Ansible roles.
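To see this naming in action, list the CloudFormation stacks once the creation starts – alongside our eks-dev stacks, an eksctl-eks-dev-cluster stack should appear:

$ aws --profile arseniy --region eu-west-2 cloudformation describe-stacks --query 'Stacks[].StackName' --output text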

The process to spin up an AWS Elastic Kubernetes Service cluster will take around 15-20 minutes, and after this CloudFormation will create another stack, for the Worker Nodes, so we can have some tea (or beer) here.

Wait for the Worker Nodes to be started:

...
[ℹ] nodegroup "worker-nodes" has 2 node(s)
[ℹ] node "ip-10-0-40-30.eu-west-2.compute.internal" is ready
[ℹ] node "ip-10-0-63-187.eu-west-2.compute.internal" is ready
[ℹ] kubectl command should work with "/home/setevoy/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "eks-dev" in "eu-west-2" region is ready

Check:

The stack and cluster are ready.

Our local kubectl should already be configured by eksctl – check the current context:

$ kubectl config current-context
arseniy@eks-dev.eu-west-2.eksctl.io

Check access to the cluster and its nodes:

$ kubectl get nodes
NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-40-30.eu-west-2.compute.internal    Ready    <none>   84s   v1.15.10-eks-bac369
ip-10-0-63-187.eu-west-2.compute.internal   Ready    <none>   81s   v1.15.10-eks-bac369

Well, that’s all for now – we are done with the CloudFormation here.

The second part – AWS Elastic Kubernetes Service: a cluster creation automation, part 2 – Ansible, eksctl (in Russian yet, will be translated shortly).
