Corey McCarty

Originally published at coreydmccarty.dev

My Introduction to Jenkins

Cover image by undraw.co

Last fall I had the joy of working through a PI (Program Increment) in my company's cloud readiness dojo, and it was really fun and exciting for those of us who like to play with new tech (oddly enough, not my entire dev team). The overall goal of this activity was to take our existing applications and make them function in the cloud that our internal Platform as a Service (PaaS) team had put together.

What We Had

The team owns around 15 Java applications, nearly half of which are SOAP web services deployed into WebLogic servlet containers. WebLogic handles most of the connection-related concerns externally from the application. We used a developer framework library from another internal team that did many things for us. There was a nice, neat logging configuration with four types of logs that each rolled through 20 files automatically, using an internal flavor of Log4j. Any configuration that needed to be done was externalized into the file system, so when you needed to change something you could just update the config file and bounce the servlet container. Our build process had recently been converted from Ant to Gradle, and we'd adopted a Nexus artifact repository where we stored our built products.

Where We Were Going

The deployment platform that we were instructed to adopt involves Pivotal Cloud Foundry (PCF) and a CloudBees Jenkins pipeline utility that is shared by most of the company. Splunk is used for logging. Spring Cloud Config Server is utilized in place of properties files, and for all practical purposes the filesystem doesn't exist.

Note: None of us really knew anything about any of these tools.

The Experience

In the first few days we went through PI planning, listing our existing things and what we needed to accomplish. The platform team had a custom Spring Boot Initializr that saved us a week or two of time, once we got all of our credentials set up. It was really easy to get space allocated in PCF and to get a simple build deploying something into that space. Next we got our configuration server in place (it was mostly generated from the Initializr), along with the repository for the config files it would serve. After all of that really fun and exciting stuff we had a demo REST endpoint running in PCF with Spring Boot and reading its config from the cloud config server. COOL!
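For reference, the client side of that wiring is tiny. Here's a minimal sketch, assuming the bootstrap-style configuration that Spring Cloud Config used at the time (the application name and URL are made up):

# bootstrap.properties (hypothetical values)
spring.application.name=demo-service
spring.cloud.config.uri=https://config-server.internal.example.com

With just that, the app asks the config server for its properties at startup instead of reading them from the filesystem.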

It wasn't long after that when we settled into the monotony of moving several thousand lines of code into the new project. We did take the opportunity to clean up some things, like needless package depths and a ton of commented-out code. We moved in the endpoint classes and began to bring over only the classes necessary to make them run, the idea being that we might shake off some cruft in the process. After the main code we would then take a few weeks to move in 415 classes full of JUnit tests that build their data in the actual database in a special schema, and we still can't figure out how to prevent these from reinitializing the application context between every JUnit class.
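We suspect Spring's test-context caching is the culprit: the framework reuses an application context across test classes only when their configuration is identical, and anything that changes the cache key forces a fresh context. A minimal sketch of that behavior (hypothetical classes, assuming JUnit 5; these are not our actual tests):

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

// These two classes share one cached context: their configuration matches exactly.
@SpringBootTest
class FirstIT {
    @Test void smoke() { }
}

@SpringBootTest
class SecondIT {
    @Test void smoke() { }
}

// This one forces a brand-new context: the extra property changes the cache key,
// as would @MockBean, @DirtiesContext, or a different set of configuration classes.
@SpringBootTest(properties = "feature.flag=true")
class ThirdIT {
    @Test void smoke() { }
}

If every test class differs in one of those ways, you get a fresh context per class, which matches what we're seeing.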

Still Learning

We left with a (just barely) functioning SOAP web service running as a Spring Boot jar with embedded Tomcat. It was able to deploy into our two PCF test environments, but not our still-necessary VM test environments.

We formed a plan and an instruction document for what we'd done up to that point, and our offshore team was able to do some really amazing things and get another five applications migrated to a similar point over the next five months. While there were some performance enhancements and contract expectations to resolve (exceptions should match those from the previous version of the application), the applications are Cloud Native!

I personally started doing the additional Jenkins pipeline enhancements. While this has somehow turned into a very long story (I don't do short very well), my actual intention is to write about what I've learned about Jenkins.

JENKINS (for real this time)

The first things we learned about the tool were that it is a CI/CD/CT platform that lets us deploy into PCF (with internally built tools from our PaaS team), and that it can hold our secrets so they don't have to be placed into our code or configuration repositories. That's really cool!

The more time I've spent working on Jenkins pipelines, the more I've come to realize that Jenkins (and, let's face it, CI/CD pipelines in general) is just a fancy UI and automation layer for running scripts. Much of what we use is Groovy scripts that were put into a reference library that we can bring in for effortless interaction with the PCF environment, and for running our Gradle builds with a Java installation that someone else manages updates for. When you get to the point of interacting with VMs for other deployments, it is done by executing SSH commands against your Linux servers. As such, there is a lot of shell scripting involved.
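As an illustration, a VM interaction from a pipeline might look something like the sketch below, assuming the SSH Agent plugin is available ('vm-ssh-key', the host, and the script paths are all made-up names):

stage('Restart on VM') {
    steps {
        // sshagent makes the stored SSH key available to commands inside the block
        sshagent(credentials: ['vm-ssh-key']) {
            sh 'ssh appuser@test-vm-01 "/opt/app/bin/stop.sh && /opt/app/bin/start.sh"'
        }
    }
}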

Our Jenkins pipeline is a multibranch pipeline that executes our Jenkinsfile definition. This allows us to build from different branches of our GitLab repositories. The basic form of a Jenkinsfile can be found below (although, to be honest, I'm not 100% sure which parts are standard and which are given to us by our PaaS team's reference pipeline).

library 'fu'
library 'bar'

pipeline {
    agent any
    options {
        // keep only the ten most recent builds
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    parameters {
        choice(name: 'PARAM_ONE', choices: 'fu\nbar\nbaz', description: 'This is the first option when you build with parameters')
    }
    environment {
        FIRST = "${params.PARAM_ONE}"
        SECOND = "to.dev.thing.second"
        PI = "3.14159"
    }
    stages {
        stage('First') {
            // runs only when PARAM_ONE was set to 'bar' or 'baz'
            when {
                anyOf {
                    environment name: 'FIRST', value: 'bar'
                    environment name: 'FIRST', value: 'baz'
                }
            }
            steps {
                firstThing env.SECOND
            }
        }
        stage('Second') {
            steps {
                secondThing env.PI
                echo "Hello World!"
                script {
                    // arbitrary assignments have to live inside a script block
                    env.BOB = "Steve"
                }
                echo "Hello ${env.BOB}!"
            }
        }
    }
}


I've come to find out that the reference pipeline is built primarily from thing.groovy scripts that let you execute a thing [params] command from the pipeline. I kind of wish I had known this earlier and made use of this functionality, because most of my solutions look a bit more hacky. What I wound up doing was to abuse script sections as steps, and to define Groovy functions outside the pipeline scope where I could implement more complex functionality without muddying the readability of the pipeline.
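The mechanism, as I understand it, is the standard Jenkins shared library convention: a Groovy file under vars/ whose call method becomes a pipeline step named after the file. A minimal sketch (deployApp and the script path are hypothetical):

// vars/deployApp.groovy in the shared library
// Defining call() is what turns 'deployApp' into a usable pipeline step.
def call(String appName, String targetEnv) {
    echo "Deploying ${appName} to ${targetEnv}"
    sh "./scripts/deploy.sh ${appName} ${targetEnv}"
}

A Jenkinsfile that loads the library can then write deployApp 'my-service', 'test' as if it were a built-in step.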

That Seems Easy

Far more complicated than actually executing any of that development work is the pipeline design itself; I don't know if you are aware or not, but it is complex enough that many people lean on other people or teams (specialists) to do it for them. We have two basic types of pipelines that we utilize.

  1. Application Pipeline
    1. Defined by a Jenkinsfile in the repository for that application
    2. Doesn't interact with any other applications
    3. Interaction with it is USUALLY triggered automatically from push/merge of the primary branches in GitLab
  2. DevOps Pipeline
    1. Defined in its own otherwise empty code repository
    2. Interacts with many applications
    3. Interaction with it is intentional, from 'Build with Parameters' on Jenkins

That's it! It seems really straightforward, right? The biggest issues we have found ourselves having are with deployments that are not in PCF and don't trigger automatically from a build. We need functionality similar to what we have in PCF (Start, Stop, ConfigRefresh, Status, Deploy, etc.).

Without the reference pipeline that handles setting environment variables in PCF, when we deploy into our VM environments we have to get the credentials from Jenkins and store them on the build server for each environment. This had to be carefully considered: we didn't want to require a build of the application just to move new credentials, but the number of credentials necessary differs for each application, so if we handled them in the second type of pipeline we would need to define those credentials in two different places, and that becomes a tripping point to manage long term. This means we have to build those environment variable files at build time, and we actually chose to do that for all of the VM environments during each and every build.
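The building block for this is Jenkins' credentials binding, roughly as sketched below (the credential ID, variable names, and file path are made up):

stage('Build VM env files') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'db-creds-l4',
                                          usernameVariable: 'DB_USER',
                                          passwordVariable: 'DB_PASS')]) {
            // Triple single quotes on purpose: the shell (not Groovy) expands
            // the variables, keeping the secrets out of the interpolated script.
            sh '''
                mkdir -p build/env
                {
                    echo "DB_USER=$DB_USER"
                    echo "DB_PASS=$DB_PASS"
                } > build/env/l4.properties
            '''
        }
    }
}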

Another thing that has taken some time and design consideration: having to perform one build with parameters to stop the application in an environment, another to push the code, another to push the configuration, and finally one to start the application is a bit absurd. We do need each of those activities to be accessible individually, but we also need a deploy action that simplifies the flow a bit.
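What we have in mind is roughly this: keep the individual actions, and add a DEPLOY choice that chains them so a release becomes a single 'Build with Parameters' run. A sketch, where stopApp, pushCode, refreshConfig, and startApp stand in for our real steps:

stage('Deploy') {
    // runs only when the ACTION build parameter was set to DEPLOY
    when { environment name: 'ACTION', value: 'DEPLOY' }
    steps {
        stopApp()
        pushCode()
        refreshConfig()
        startApp()
    }
}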

The Benefits

While our journey with Jenkins is far from over, we are beginning to see some tremendous benefits from the changes we have made. Previously, a DevOps team made the configuration changes we needed and deployed code into the different test levels. With this new flow we are able to do those things at will, and even better, someone who isn't comfortable diving into the old build server can perform these activities if needed. Our tester is not a developer by training, and is really excited at the possibility of moving a given build when necessary. We also intend to bring any activity that we have previously executed on a server directly into a Jenkins pipeline.

The next big thing we are excited about is having our integration, regression, and stress tests executable with parameters from Jenkins, instead of having someone dig around in configuration files and then run the start scripts from the server. A simple build with parameters that says to run 'Stress Test' on 'VM L4' with a length of '6hrs' will be able to do everything that presently takes nearly an hour.
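As a sketch, the parameters for that test-runner pipeline might look like this (all of the names and choices are made up):

parameters {
    choice(name: 'TEST_TYPE', choices: 'Integration\nRegression\nStress Test',
           description: 'Which suite to run')
    choice(name: 'TARGET_ENV', choices: 'VM L2\nVM L3\nVM L4',
           description: 'Environment to run against')
    string(name: 'DURATION', defaultValue: '6hrs',
           description: 'Run length (stress tests only)')
}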
