Building a Kubernetes cluster is hard, so why not use a managed Kubernetes service? In the previous post I added subnets to a VPC. In this post, I’ll use the VPC to create an AWS EKS cluster.
The complete project is available on GitHub.
Configuration
The minimum configuration you need to create a cluster is a name for the cluster, the Kubernetes version you want to use, and an IAM role your cluster will use. While optional, specifying the types of logs your cluster sends to CloudWatch can be very useful when debugging and troubleshooting. The configuration below, which you can copy/paste into the YAML file from the previous post, has four parameters. The parameter `eks:cluster-name` is the name of the cluster. The parameter `eks:k8s-version` is the version of Kubernetes to use (at the time of writing, 1.14 was the latest available version). The parameter `eks:cluster-role-arn` holds the ARN of the IAM role your cluster needs to perform tasks; for more details on how to create the role, check out the AWS docs. Lastly, the parameter `eks:cluster-log-types` is a comma-separated list of the components that should emit logs.
eks:cluster-name: myEKSCluster
eks:k8s-version: "1.14"
eks:cluster-role-arn: "arn:aws:iam::ACCOUNTID:role/ServiceRoleForAmazonEKS"
eks:cluster-log-types: "api,audit,authenticator,scheduler,controllerManager"
You can either use the command line, like `pulumi config set eks:cluster-name "myEKSCluster"`, to add these new configuration variables, or you can add them directly to the YAML file. The YAML file with all the configuration is called `Pulumi.<name of your project>.yaml`.
Creating the cluster
The code below extends the code created in the previous post, so you can copy/paste this snippet into your Go code too. Walking through the code: it gets the configuration for the cluster name and the enabled log types from the YAML file, and it uses the subnets created in the previous post. The call to `eks.NewCluster()` tells Pulumi to create the EKS cluster in the VPC you have, using the subnets you already created.
// Create an EKS cluster
clusterName := getEnv(ctx, "eks:cluster-name", "unknown")
enabledClusterLogTypes := strings.Split(getEnv(ctx, "eks:cluster-log-types", "unknown"), ",")

clusterArgs := &eks.ClusterArgs{
	Name:                   clusterName,
	Version:                getEnv(ctx, "eks:k8s-version", "unknown"),
	RoleArn:                getEnv(ctx, "eks:cluster-role-arn", "unknown"),
	Tags:                   tags,
	VpcConfig:              subnets,
	EnabledClusterLogTypes: enabledClusterLogTypes,
}

cluster, err := eks.NewCluster(ctx, clusterName, clusterArgs)
if err != nil {
	fmt.Println(err.Error())
	return err
}

ctx.Export("CLUSTER-ID", cluster.ID())
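Because the cluster ID is exported, you can read it back later with the Pulumi CLI once the stack has been deployed; for example:

```shell
# Query the exported value from a deployed stack
pulumi stack output CLUSTER-ID
```

This is handy when other scripts or tooling need the cluster ID without parsing the full `pulumi up` output.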
Running the code
Like the previous time, the last thing to do is run `pulumi up` to tell Pulumi to go create an EKS cluster inside your VPC! If you're using the same project and stack, Pulumi will automatically realize it needs to create the cluster inside the existing VPC and won't create a new one. It might take AWS a little while to finish creating the cluster (in my case it took almost 10 minutes).
$ pulumi up
Previewing update (builderstack):

     Type                  Name                  Plan
     pulumi:pulumi:Stack   builder-builderstack
 +   └─ aws:eks:Cluster    myEKSCluster          create

Outputs:
  + CLUSTER-ID: output<string>

Resources:
    + 1 to create
    4 unchanged

Do you want to perform this update? yes
Updating (builderstack):

     Type                  Name                  Status
     pulumi:pulumi:Stack   builder-builderstack
 +   └─ aws:eks:Cluster    myEKSCluster          created

Outputs:
  + CLUSTER-ID: "myEKSCluster"
    SUBNET-IDS: [
        [0]: "subnet-<id>"
        [1]: "subnet-<id>"
    ]
    VPC-ID    : "vpc-<id>"

Resources:
    + 1 created
    4 unchanged

Duration: 9m55s

Permalink: https://app.pulumi.com/retgits/builder/builderstack/updates/3
The permalink at the bottom of the output takes you to the Pulumi console where you can see all the details of the execution of your app and the resources that were created.