Baptiste Mille-Mathias

Kubernetes Configuration Management with Ansible (Part 1)

Since day 0 of administrating OpenShift last year, one of the core things I wanted to tackle with the team was how to manage all the customization we would have to do. I'm used to automation software, from bare shell scripts (yes, this is automation :) to things like Puppet, CFEngine or Ansible, and I can't imagine now managing an application, a fleet of nodes, or a cluster without being able to automate the deployment of the configuration.

So I share here, not in full detail, what we did in our team; I'll split the explanation into a couple of posts. The approach is quite simple but effective: we recently started to run this job regularly from AWX in check mode to observe configuration drift, and we may run it in run mode sooner or later.
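
Check mode only reports what would change, which makes it handy for drift detection. As a minimal sketch, assuming the entry playbook is named site.yml (a hypothetical name) and the target cluster is selected with a variable:

# Report configuration drift without applying anything.
# site.yml and the cluster variable are illustrative, not our actual names.
ansible-playbook site.yml --check --diff -e cluster=prod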

Setting a plan

We started by taking stock and gathering requirements with the colleagues (it is ALWAYS important to do so):

  • We will administrate several clusters.
  • A big part of the configuration will be the same for all clusters.
  • We must be able to deploy some cluster-specific manifests for each cluster.
  • The same manifests will be deployed on several clusters but may differ slightly (quota values, for instance), and we don't want to store several copies of these manifests just for the sake of different values inside. So we must be able to have placeholders in the manifests which are filled in with each cluster's values (see the sketch after this list).
  • We must be able to add manifests but also to remove manifests from the cluster.
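
To illustrate the placeholder requirement, a manifest can embed a Jinja2 variable that Ansible fills in from the per-cluster config file. This is a hypothetical sketch; the file path and the quota_pods variable are illustrative, not our actual names:

# manifests/common/10_feature_foo/resourcequota.yml (illustrative path)
# quota_pods would be defined per cluster, e.g. in config/dev.yml or config/prod.yml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a
spec:
  hard:
    pods: "{{ quota_pods }}"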


Directory structure

Here is an example of the structure we chose:

project-dir
├── config
│   ├── common.yml
│   ├── dev.yml
│   └── prod.yml
└── manifests
    ├── common
    │   ├── 05_authentication
    │   ├── 10_feature_foo
    │   └── 30_feature_baz
    ├── dev
    │   ├── 40_feature_dev_buz
    │   └── 60_feature_dev_buu
    └── prod
        ├── 50_feature_prod_bar
        └── 70_feature_prod_boo

The config directory contains a YAML file for each cluster, with the connection details and all cluster-specific variables; the file common.yml contains the common and default variables.

# prod config file
connection:
  - url: https://prd.prj.domain.tld
    token: !vault....

ldap_connection:
  - bind_user: cn=ldap-prd-user,...
    bind_password: !vault...

authorized_groups:
  -  group-dev-1
  -  group-dev-2
  -  group-dev-3

# other prod specific variables
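
For comparison, common.yml could hold the defaults shared by every cluster; the variables below are purely illustrative, not our actual ones:

# common.yml: defaults shared by all clusters (illustrative values)
quota_pods: "50"

authorized_groups: []

# any variable here can be overridden in dev.yml or prod.yml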

File naming convention

We then thought about a naming convention for the manifest files, not to enforce the content of the file (except for the status field), but to make it easy to spot a manifest file on disk. We came up with the convention XX_kind_name_namespace_status.yml where:

  • XX is a number which helps apply the manifests in a specific order.
  • kind is the kind of the manifest (deployment, configMap, ...).
  • name is the value of the metadata.name field of the object to be modified.
  • namespace is the namespace the object belongs to. If the object is not namespaced, like a namespace or a clusterrole, we set the value to global.
  • status has either the value present to ensure the object exists or absent to ensure it does not exist. This value from the file name will be passed to the k8s module and will enforce the state of the manifest.

A few examples

To better understand how it works, let's have a look:

  • a manifest file to create a namespace foobar would be named 10_namespace_foobar_global_present.yml
  • a deployment named fluentd in namespace cluster-logging would be named 30_deployment_fluentd_cluster-logging_present.yml
  • a manifest to delete a configMap named my-config in namespace dev-hideout would be named XX_configmap_my-config_dev-hideout_absent.yml
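
To give an idea of how the status suffix drives deployment (the actual playbook is covered in the next post), a task along these lines could derive the state from the file name. This is a simplified sketch under my own assumptions: manifest_files is a hypothetical pre-built list of manifest paths, and the module shown is kubernetes.core.k8s:

- name: Apply manifests, honoring the status encoded in each file name
  kubernetes.core.k8s:
    # files ending in _absent.yml are removed, everything else is ensured present
    state: "{{ 'absent' if item is search('_absent.yml$') else 'present' }}"
    # render the manifest as a Jinja2 template so placeholders get cluster values
    definition: "{{ lookup('template', item) | from_yaml }}"
  # sorting the list makes the XX prefix control the apply order
  loop: "{{ manifest_files | sort }}"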

Now that we have an overview of the structure of the project, the next post will explain the Ansible playbook.
