Richard Lenkovits
Jenkins Pipelines and their dirty secrets 2.

Solving parameterization problems with Pipelines:

In the following sections, I will go through several Jenkins workflow-handling examples with Pipelines. Let's not waste time and get straight to the code!


1. Run parameterized jobs

When you create a pipeline, you can set it to be parameterized:
If you want to use these parameters in the pipeline script you can refer to them as params.your_parameter. Now I'm going to confuse you: You can declare parameters in your pipeline script too!

#!/usr/bin/env groovy
pipeline {
    agent any
    parameters {
        string(
            name: 'MyString',
            defaultValue: "default",
            description: "Any")
...

Indeed, nothing stops you from putting a parameters block in your pipeline script. So what happens if you build the job now? Here's the answer: when your pipeline job builds for the first time, it clears all the previous configuration set in your job, re-parameterizes it according to the parameters block of the pipeline script, and runs it with the default parameters. If you do this and have a look at your job's Console Output, you will see the following message, which warns you that the configuration in the Jenkins job will be lost and replaced with the one in the pipeline script: WARNING: The properties step will remove all JobPropertys currently configured in this job, either from the UI or from an earlier properties step.
Anyway, this is how you set parameters in your pipeline script and use them, for example, to run another job with parameters:

#!/usr/bin/env groovy

pipeline {
    agent any
    parameters {
        choice(
            name: 'Nodes',
            choices: "Linux\nMac",
            description: "Choose Node!")
        choice(
            name: 'Versions',
            choices: "3.4\n4.4",
            description: "Build for which version?")
        string(
            name: 'Path',
            defaultValue: "/home/pencillr/builds/",
            description: "Where to put the build!")
    }
    stages {
        stage("build") {
            steps {
                script {
                    build(job: "builder-job",
                        parameters:
                        [string(name: 'Nodes', value: "${params.Nodes}"),
                        string(name: 'Versions', value: "${params.Versions}"),
                        string(name: 'Path', value: "${params.Path}")])
                }
            }
        }
    }
}

2. Initialize parameterized job:

One important thing to know: it is handy to create the pipeline job as a parameterized job, with all these parameters in it, because if you don't, you will not be able to choose parameters when you build the job for the first time!
Set your job to be parameterized:
Anyway, there is another way (explained in the section above too!):
When a job that has a parameters block in its Jenkinsfile runs, it clears all the previous parameters set in the job's config and overwrites them with the ones in the Jenkinsfile. This is a kind of first dry run which sets the parameters; on the second run, you can run the job as a parameterized job even if you didn't create it as one.
Unfortunately, in this case, during the first dry run, when you can't set your parameters yet, your job basically just starts running with its default parameters. This can be bad, as you may not want that.
For this scenario I offer a workaround:
Create an initializer choice parameter in the Jenkinsfile with a "Yes" default.
choice(name: 'Invoke_Parameters', choices: "Yes\nNo", description: "Do you wish to do a dry run to grab parameters?")
Then create a first stage which checks whether this is an initial run. If it is, it aborts the build.

    stages {
        stage("parameterizing") {
            steps {
                script {
                    if ("${params.Invoke_Parameters}" == "Yes") {
                        currentBuild.result = 'ABORTED'
                        error('DRY RUN COMPLETED. JOB PARAMETERIZED.')
                    }
                }
            }
        }

As this comes after the parameters block, your job will be parameterized when you run it the second time, and you can then set the Invoke_Parameters choice parameter to "No".
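Putting the pieces together, a minimal sketch of this workaround could look like the following. This is an illustrative example, not the article's exact job: the "Greeting" parameter and the "work" stage are made up.

```groovy
#!/usr/bin/env groovy

// Sketch: a self-parameterizing pipeline. The first run only registers
// the parameters and aborts; afterwards the job can be started with
// "Build with Parameters" and Invoke_Parameters set to "No".
pipeline {
    agent any
    parameters {
        choice(name: 'Invoke_Parameters', choices: "Yes\nNo", description: "Do you wish to do a dry run to grab parameters?")
        string(name: 'Greeting', defaultValue: "hello", description: "Illustrative parameter")
    }
    stages {
        stage("parameterizing") {
            steps {
                script {
                    if (params.Invoke_Parameters == "Yes") {
                        currentBuild.result = 'ABORTED'
                        error('DRY RUN COMPLETED. JOB PARAMETERIZED.')
                    }
                }
            }
        }
        stage("work") {
            steps {
                // Only reached on non-dry runs.
                echo "Running with Greeting=${params.Greeting}"
            }
        }
    }
}
```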


3. Inheriting job variables from previous jobs:

Previously, in build flows, it was common to pass parameters to jobs and get the resulting AbstractBuild when required:

b = build( "job1", param1: "foo", param2: "bar" )
build( "job2", param1: b.build.number )

There is a way to do this in Pipeline too. You can reach the downstream build's variables as buildVariables:

#!/usr/bin/env groovy

pipeline {
    agent any
    stages {
        stage("build") {
            steps {
                script {
                    def b = build(job: "parameter-source-job", propagate: false)
                    build(
                        job: "analyzer-job",
                        parameters: [
                            [
                                $class: 'StringParameterValue',
                                name: 'PARAM_ONE',
                                value: b.buildVariables.PARAM_ONE
                            ],
                            [
                                $class: 'StringParameterValue',
                                name: 'PARAM_TWO',
                                value: b.buildVariables.PARAM_TWO
                            ]
                        ]
                    )
                    if (b.result == 'FAILURE') {
                            error("${b.projectName} FAILED")
                    }
                }
            }
        }
    }
}
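For reference, buildVariables picks up environment variables set during the downstream build. A minimal sketch of what the "parameter-source-job" above could look like, with made-up values:

```groovy
#!/usr/bin/env groovy

// Hypothetical "parameter-source-job" (scripted pipeline): values
// assigned to env here become visible to the upstream job via the
// returned build's buildVariables.
node {
    stage('export') {
        env.PARAM_ONE = 'value-one'
        env.PARAM_TWO = 'value-two'
    }
}
```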

4. Pipeline asking for input during build

Pipeline scripts offer a syntax which is useful in case you need to ask for permission or data during the build.
Imagine a scenario in which I need to choose some parameters for the build depending on another parameter I have previously chosen.
This is where the user input choice comes in handy.

  • First, a list of nodes (strings) is generated by a bash script in the node block, during an initial run that is stopped by the parameterizing stage.
  • During the second run, we choose a Node parameter from this list of nodes, as we now run the job as a parameterized build.
  • After this, in the "choose version" stage, another bash script is run with the node parameter as its first argument, which generates the list of node versions for that node.
  • At the input message part, the job stops, and we can choose from these versions.
  • In the end, the job runs another job with our parameters.
#!/usr/bin/env groovy

def nodes
def versions

node {
    dir('/home/pencillr/workspace') {
        nodes = sh(script: 'sh list_nodes.sh', returnStdout: true).trim()
    }
}

pipeline {
    agent any
    parameters {
        choice(name: 'Invoke_Parameters', choices: "Yes\nNo", description: "Do you wish to do a dry run to grab parameters?")
        choice(name: 'Nodes', choices: "${nodes}", description: "")
    }
    stages {
        stage("parameterizing") {
            steps {
                script {
                    if ("${params.Invoke_Parameters}" == "Yes") {
                        currentBuild.result = 'ABORTED'
                        error('DRY RUN COMPLETED. JOB PARAMETERIZED.')
                    }
                }
            }
        }
        stage("choose version") {
            steps {
                script {
                    def version_collection
                    def chosen_node = "${params.Nodes}"
                    dir('/home/pencillr/workspace') {
                        version_collection = sh(script: "sh list_versions.sh $chosen_node", returnStdout: true).trim()
                    }
                    versions = input(message: 'Choose testload version!', ok: 'SET', parameters: [choice(name: 'TESTLOAD_VERSION', choices: "${version_collection}", description: '')])
                }
            }
        }
        stage("build") {
            steps {
                script {
                    build(job: "builder-job",
                        parameters:
                        [string(name: 'Nodes', value: "${params.Nodes}"),
                        string(name: 'Versions', value: "${versions}")])
                }
            }
        }
    }
}

Top comments (20)

Filip Sobalski

I found another way to prevent Jenkins from overriding parameter values on each run. It seems to work with the latest declarative pipelines:

string(name: 'BRANCH', defaultValue: params.BRANCH ?: 'master')

If there is no param yet, it will use master; otherwise it will use whatever has been set "manually" in the defaults in the job configuration.
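A minimal sketch of this fallback-default pattern in context; the parameter names are illustrative, and the same Elvis-operator trick should carry over to other parameter types such as booleanParam:

```groovy
// Sketch, assuming a declarative pipeline. On the very first run
// params.BRANCH is null, so 'master' is used; on later runs the
// previously configured value survives as the default.
pipeline {
    agent any
    parameters {
        string(name: 'BRANCH', defaultValue: params.BRANCH ?: 'master')
        booleanParam(name: 'DEPLOY', defaultValue: params.DEPLOY ?: false)
    }
    stages {
        stage('show') {
            steps {
                echo "BRANCH=${params.BRANCH}, DEPLOY=${params.DEPLOY}"
            }
        }
    }
}
```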

Ajay Suwalka

Thanks a lot! Crazily, this info is not available anywhere else.

siddarajugc

Hi Filip, thanks a lot for this info.
I can use the above code for a string parameter, but not for a boolean or choice parameter. Can you please provide details on how to do the same for boolean/choice parameters?

Miguel C • Edited

I was new to Jenkinsfiles. I'm using scripted pipeline and this post helped me a lot, so thanks.

There is still an issue I'm trying to work around and had no luck so far.

As you stated "when you can't set your parameters yet, your job basically just starts running with its default parameters. This can be bad, as you may not want that."

I've used your trick to avoid this, but to be honest, in my case it doesn't make much of a difference, because they will still be set the second time... and on a 3rd run it will still have the parameters from the second run...

In my case this is important because I'm trying to run 3 different tools that share some parameters but differ on others...

Using input and a switch case I can add those in later... what I do is set the parameters they all share in an initParams var

so I do:

props = []
initParams = [
    choice(
        description: "Please specify which tool to run.",
        name: "tool",
        choices: "tool1\ntool2\ntool3"
    )
]

tool1Params = [ .... ]

switch (params.tool) {
    case "tool1":
        extraParams = input( id: "extraParams",
            message: "Tool 1 was selected, please provide extra params:",
            parameters: tool1Params
        )
        initParams += tool1Params
        break
   .....
}


props.add(parameters(initParams))
properties(props)

With this, on the first run I use "Build Now", and afterwards "Build with Parameters" is available, showing only the initParams and moving to an input asking for the rest...

The problem is that if I go to "Parameters" it still tells me I've only used the initParams... and I want to be able to see all of them there...

If I don't do the workaround it will show all the args there... but if I then run it again for "tool2" it will always list the parameters used in the first run...

Sadly the Jenkins docs aren't the best... especially for Jenkinsfiles and scripted pipeline, so maybe you know another trick to accomplish this, or can tell me I'm doing something very wrong :P

Miguel C

Replying to myself... from what I was able to google, "properties" can only be used once in the Jenkinsfile, and it also overwrites whatever was in the UI (if we are using it, which is not my case).

So what I want is pretty much impossible to do.

All I can do is have the initial args in the properties, then use input for the rest (as I do, but not attempt to set them in properties(parameters(...)), as this would simply configure the job to use whatever my first choice is).

It would be great if we could save the params for "input" too, as this offers easy visibility, but it seems that is not the case.

D KISHAN

Hi, is there a way through which I can pass a value to the job and use it in the job? I want to send the branch name from the pipeline, which can be used in the job to build. Is this possible?

Miguel C

The branch name should be available as the environment variable env.BRANCH_NAME.

I have a few jobs where I use it to only allow triggered jobs if the branch is master.

def props = []
if (env.BRANCH_NAME == 'master') {
    props.add(pipelineTriggers([
        parameterizedCron('''
            H 19 * * * %param=value
        ''')]))
}
properties(props)
Gustavo

Hi, I believe that BRANCH_NAME is only populated when running a multi-branch pipeline. FYI then.
thanks
Gustavo.

Wladimir Mutel

Dear Mr. Lenkovits,

I read these communications from 2 years ago
issues.jenkins-ci.org/browse/JENKI...
(describing example need but closed as a duplicate of
issues.jenkins-ci.org/browse/JENKI...
where this feature was planned but its implementation had never progressed)

Also I read SO communications like this : stackoverflow.com/questions/443979...
and conclude that I am not alone with this need (to pass all parameters from a top-level job to a down-level one when calling it through the 'build' function).

And apparently Jenkins still does not allow doing this in a short and easy way.
Do you have your own ideas on how to resolve this problem?
I am starting to lose hope a bit.
I had to glue my separate jobs into one monster-job with parts controlled by Boolean parameters.
Would appreciate your useful response. Thank you in advance.

Richard Lenkovits

Hi,
To begin with, if I were supposed to solve such a problem, I'd definitely try to figure out whether I could eliminate having so many parameters for a Jenkins job. This is not the best possible design, and it can be eliminated with modularized thinking and good configuration management.

To answer the question: I don't think Jenkins has a simple solution for this. I would probably solve the issue using config files or property files to pass many values. That could be done using the pipeline-utility-steps plugin, see here: github.com/jenkinsci/pipeline-util...
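For illustration, a minimal sketch of the property-file approach with the pipeline-utility-steps plugin; the file name and keys here are made up:

```groovy
// Sketch: shuttling many values around via a properties file,
// assuming the pipeline-utility-steps plugin is installed.
// 'build.properties' and its keys are hypothetical.
node {
    stage('read-config') {
        writeFile file: 'build.properties', text: 'version=1.2.3\ntarget=linux'
        def props = readProperties file: 'build.properties'
        echo "Building ${props.version} for ${props.target}"
    }
}
```

A downstream job could read the same file from a shared location (or an archived artifact) instead of receiving dozens of individual parameters.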

ravikakadia • Edited

I have been handed a parameterized pipeline job which has its default value set to String A, and now I have been asked to change this default value to String B. I configured String B in the default value text box and saved the job. The job is a scheduled one, so when it is triggered by the system it runs with String A instead of String B (I was expecting it to run with String B), and on the UI too the default value is changed back to String A. Why is this behaviour? Do I need to change the parameter default value in the Jenkinsfile?

Richard Lenkovits

Hi there! I can't be sure without seeing the code, but my guess is yes. Parameter declarations in the Jenkinsfile overwrite the ones in the job configuration. There's an alternative solution too, see here.

ravikakadia

I will post the screenshots later today; however, I think you are right, changing the Jenkinsfile will work for me.

ravikakadia

Hi, it worked after making changes in the Jenkinsfile.
The Jenkinsfile is hosted on a GitHub repo which is checked out at the time of the job run. We are able to run with the required default values after changing them in the Jenkinsfile.
Thanks a lot for this article and the help.

Guille Casco • Edited

Hey, what about if I want to use the user input during a pipeline job, but instead of pre-set choices, I want that choice to be filled in a drop-down (like the extensible choice parameter) with some files from the workspace folder? I can actually do this, but only before the job starts, as I show in the images. But what if I want to choose those files from the workspace folder during the pipeline, in step 2 or 3 for example, after the code is checked out and I have the newest SQL files? Is this possible? Thanks man. (edit) hmmm images are not loading

Conrad Braam🤖

I was hoping for a strategy to make it easy to re-run a parameterized job, but with the last values populated as the defaults, though obviously only when run interactively. Is there a mechanism for doing this using "storage" that gets persisted across runs? Can I store data by pushing stuff into the properties/props collection? Like, at the end of a job, could I write these "last used" values back into the job?

Gustavo

nice post Richard!!!

Bianca Gotaski • Edited

I have a situation where I would like to create dynamic parameters like this:
parameter X is an INPUT type, where the user will insert the number of files (which refers to the second parameter Y).
So if the user says there will be 3 files, it will create 3 INPUT-type parameters so the user can specify the path for each file.
I've tried with a for loop using the first parameter as a condition, but it didn't work.
Do you have any suggestions?
PS: great article by the way!

Richard Lenkovits

There are workarounds; for example, you could redefine parameters over multiple pipeline runs. But I wouldn't suggest that, as it's not a clean way to begin with.
You should simply use a text parameter where the user can insert the file paths separated by newlines or spaces, and then collect/validate the individual parameters in code at runtime.

rushikeshkalyani

Thanks Richard! Information is so helpful! :)