CircleCI pipelines are defined in YAML, a syntax that has been widely adopted across software tools and services. YAML is a human-readable, declarative data format used in configuration files (like those for CircleCI pipelines) and in applications that store or transmit data. The data in a pipeline configuration file specifies and controls how workflows and jobs are executed when they are triggered on the platform. These pipeline directives tend to be repetitive, which causes the config syntax to grow in volume, and over time that added volume makes the config harder to maintain. Because YAML is a data format, it offers only minimal reusability features (anchors and aliases), which are too limited to be useful for defining CI/CD pipelines. Fortunately, CircleCI configuration parameters provide robust capabilities for encapsulating and reusing functionality that would otherwise be repeated throughout the file.
In this post, I will introduce pipeline configuration parameters and explain some of the benefits of adopting them in your pipeline configurations.
What are configuration parameters?
Executors, jobs, and commands are considered objects within a pipeline configuration file. Like objects in the object-oriented programming (OOP) paradigm, pipeline objects can be extended to provide customized functionality. CircleCI configuration parameters let developers extend the capabilities of executors, jobs, and commands by providing ways to create, encapsulate, and reuse pipeline configuration syntax.
Executors, jobs, and commands are objects with their own individual properties. Parameters also have their own distinct properties that can interact with the object. The composition and properties for parameters include:
parameters:
  |_ parameter name: (Specifies a name for the parameter)
     |_ description: (Optional. Describes the parameter)
     |_ type: (Required. Data type: string, boolean, integer, or enum)
     |_ default: (Optional. The default value for the parameter)
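To make that structure concrete, here is a minimal sketch of a parameter declared on a job. The deploy-docs job and target_branch parameter are hypothetical names used only for illustration:

version: 2.1
jobs:
  deploy-docs:
    parameters:
      target_branch:
        description: "The branch to deploy the docs to"
        type: string
        default: "gh-pages"
    machine: true
    steps:
      # The parameter value is referenced with the << parameters.name >> syntax
      - run: echo "Deploying docs to << parameters.target_branch >>"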
When should I use parameters in a pipeline config?
Use parameters when data and functionality are repeated within pipelines. In other words, if any executors, jobs, or commands are defined or executed more than once in your pipelines, I recommend identifying those patterns and defining them as parameterized elements in your configuration file. Using parameters lets you centrally manage and maintain functionality, and it dramatically reduces redundant data and the total lines of syntax in configuration files. Being able to pass variable arguments to parameters is another benefit, and the overall readability of the pipeline syntax improves as well. For example, a step repeated across jobs can collapse into a single parameterized command, as in the sketch below.
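Two jobs that post similar status messages could each repeat the same run: step, or they could share one parameterized command. A minimal sketch, assuming a hypothetical notify command that simply echoes its message:

version: 2.1
commands:
  notify:
    parameters:
      message:
        type: string
    steps:
      # Stand-in for a real notification step
      - run: echo "<< parameters.message >>"
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - notify:
          message: "Build started"
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - notify:
          message: "Deploy started"

Any change to how the message is sent now happens in one place instead of in every job.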
How do I create parameters?
As I mentioned earlier, executors, jobs, and commands are the configuration elements that can be extended with parameters. Deciding which of these elements to extend will depend on your specific use case.
NOTE: The parameter features are available only in CircleCI version 2.1 and above. For parameters to be recognized by the platform, the version must be declared at the top of the config file, like this: version: 2.1
Let me give you an example of defining a parameter for the parallelism quantity within a jobs: object:
version: 2.1
jobs:
  build_artifact:
    parameters:
      parallelism_qty:
        type: integer
        default: 1
    parallelism: << parameters.parallelism_qty >>
    machine: true
    steps:
      - checkout
workflows:
  build_workflow:
    jobs:
      - build_artifact:
          parallelism_qty: 2
In this example, the build_artifact: job has a parameters: key defined with the name parallelism_qty:. This parameter has a data type of integer and a default value of 1. The parallelism: key is a property of the jobs: object and defines the number of executors to spawn for executing the commands in the steps: list. In this case, the special checkout command will be executed on every executor spawned. The job's parallelism: key has been assigned the value << parameters.parallelism_qty >>, which references the parallelism_qty: parameter definition above it. This example shows how parameters can add flexibility to your pipeline constructs and provide a convenient way to centrally manage functionality that is repeated in pipeline syntax.
Using parameters in job objects
Continuing with the previous example, the parallelism_qty: parameter in the workflow block demonstrates how to use parameters within configuration syntax. Because parallelism_qty: is defined in a job object, it can be set when the job is specified in a workflow. The workflows: block has a jobs: list that specifies build_artifact: and assigns a value of 2 to its parallelism_qty: parameter, which spawns 2 executors and executes the commands in the steps: list on each. If that value were 3, the build_artifact: job would spawn 3 executors and run the commands 3 times, as in the sketch below.
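In fact, the same parameterized job can be invoked more than once in a single workflow with different arguments. Here is a hedged sketch that assumes CircleCI's name: key for giving each invocation a unique label:

workflows:
  build_workflow:
    jobs:
      - build_artifact:
          name: build-artifact-2x
          parallelism_qty: 2
      - build_artifact:
          name: build-artifact-3x
          parallelism_qty: 3

One job definition now drives two runs with different levels of parallelism.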
Reusing executor objects in pipeline config
The previous section demonstrates how to define and use parameters within a jobs object. In this section, I will describe how to use parameters with executors. Executors define the runtime or environment used to execute pipeline jobs and commands. Executor objects have a set of their own unique properties that parameters can interact with. This is an example of defining and implementing reusable executors:
version: 2.1
executors:
  docker-executor:
    docker:
      - image: cimg/ruby:3.0.2-browsers
  ubuntu_20-04-executor:
    machine:
      image: 'ubuntu-2004:202010-01'
jobs:
  run-tests-on-docker:
    executor: docker-executor
    steps:
      - checkout
      - run: ruby unit_test.rb
  run-tests-on-ubuntu-2004:
    executor: ubuntu_20-04-executor
    steps:
      - checkout
      - run: ruby unit_test.rb
workflows:
  test-app-on-diff-os:
    jobs:
      - run-tests-on-docker
      - run-tests-on-ubuntu-2004
This example shows how to define and implement reusable executors in your pipeline config. The executors: key at the top of the file defines 2 executors, one named docker-executor: and one named ubuntu_20-04-executor:. The first specifies a Docker executor and the second specifies a machine executor that uses an Ubuntu 20.04 operating system image. Predefining executors this way enables developers to create a list of executor resources for the pipeline and to centrally manage the properties related to each executor type. For instance, the Docker executor has properties that do not pertain to, and are unavailable to, the machine executor, because the machine executor is not of the type docker.
The jobs: block defines run-tests-on-docker: and run-tests-on-ubuntu-2004:; both have an executor: key whose value is the appropriate executor for that job. The run-tests-on-docker: job executes its steps using the docker-executor definition, and the run-tests-on-ubuntu-2004: job executes on the ubuntu_20-04-executor definition. As you can see, predefining these executors in their own stanza makes the config syntax easier to read, which makes it easier to use and maintain. Any change to an executor can be made in its definition and will propagate to every job that implements it. This kind of centralized management also applies to job and command objects that are defined in a similar way.
In the workflows: block, the test-app-on-diff-os: workflow triggers two jobs in parallel that execute unit tests in their respective executor environments. Running tests on different executors is helpful when you want to find out how an application behaves on different operating systems, and this type of test is common practice. The takeaway here is that I defined the executors just once and easily implemented them in multiple jobs. Executors can declare parameters of their own as well, as shown in the sketch below.
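Parameterizing an executor combines both techniques from this post. A minimal sketch, assuming a hypothetical ruby-docker executor whose image tag is parameterized:

version: 2.1
executors:
  ruby-docker:
    parameters:
      tag:
        type: string
        default: "3.0.2-browsers"
    docker:
      - image: cimg/ruby:<< parameters.tag >>
jobs:
  run-tests-on-older-ruby:
    executor:
      name: ruby-docker
      tag: "2.7.4-browsers"  # overrides the default tag
    steps:
      - checkout
      - run: ruby unit_test.rb

Note that invoking a parameterized executor uses the longer executor: form with a name: key, rather than the shorthand used earlier.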
Reusable command objects
Commands can also be defined and implemented within config syntax, just like executors and jobs. Although command object properties differ from those of executors and jobs, defining and implementing them is similar. Here is an example showing reusable commands:
version: 2.1
commands:
  install-wget:
    description: "Install the wget client"
    parameters:
      version:
        type: string
        default: "1.20.3-1ubuntu1"
    steps:
      - run: sudo apt install -y wget=<< parameters.version >>
jobs:
  test-web-site:
    docker:
      - image: "cimg/base:stable"
        auth:
          username: $DOCKERHUB_USER
          password: $DOCKERHUB_PASSWORD
    steps:
      - checkout
      - install-wget:
          version: "1.17.0-1ubuntu1"
      - run: wget --spider https://www.circleci.com
workflows:
  run-tests:
    jobs:
      - test-web-site
In this example, a reusable command has been defined and implemented by a job. The commands: key at the top of the config defines a command named install-wget: that installs a specific version of the wget client. A parameter is defined to specify which wget version to install; the default: key supplies the value 1.20.3-1ubuntu1 when no version is given. The steps: key lists a single run: command that installs the version of wget specified in the version: parameter, which is referenced by the << parameters.version >> variable.
As shown in the example, the defined command can be implemented and used by job objects. The steps stanza in the test-web-site: job implements the install-wget: command, with its version: parameter set to an earlier version of wget rather than the default value. The last run: command in the job uses wget to test the response from the given URL. This example runs a simple test to check whether a website is responding to requests.
The workflows: block, as usual, triggers the test-web-site job, which executes the reusable install-wget command. Just like executors and jobs, commands bring the ability to reuse code, centrally manage changes, and increase the readability of the syntax within pipeline configuration files. Because the only parameter of install-wget: has a default, the command can even be reused in other jobs with no arguments at all, as in the sketch below.
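For example, a second job could reuse the command without repeating any of the install logic; a minimal sketch with a hypothetical check-mirror job:

jobs:
  check-mirror:
    docker:
      - image: "cimg/base:stable"
    steps:
      # No version given, so the default 1.20.3-1ubuntu1 is installed
      - install-wget
      - run: wget --spider https://www.circleci.com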
Conclusion
In this post, I described the basics of using pipeline parameters and reusable pipeline objects: executors, jobs, and commands. The key takeaways:
- Executors, jobs, and commands are considered objects with properties that can be defined, customized, and reused throughout pipeline configuration syntax
- Reusing these objects helps keep the amount of syntax to a minimum while providing terse object implementations that optimize code readability and centralize management of functionality
Although I wrote this post to introduce you to the concepts of parameters and reusable objects, I encourage you to review the Reusable Config Reference Guide. It will help you gain a deeper understanding of these capabilities so that you can take full advantage of these awesome features.
I would love to know your thoughts and opinions, so please join the discussion by tweeting to me @punkdata.
Thanks for reading!