Building pipelines by linking Jenkins jobs

Continuous integration servers have become a cornerstone of any professional development environment. By letting a machine integrate and build the software, developers can focus on their tasks: fixing bugs and developing new features. With the emergence of trends such as continuous delivery and continuous deployment, the continuous integration server is no longer limited to integrating your products; it has become a central piece of infrastructure.

However, organizing jobs on the CI server is not always easy.

This blog post describes a couple of strategies for creating dependent tasks with Jenkins (or Hudson).

On keeping things small

To make your continuous integration server really efficient for your team, you need to give feedback to your developers as fast as possible. After a commit (or a push), we all wait for a notification to make sure we didn’t introduce any obvious bugs (at some point, we’ve all forgotten to add a file to the SCM or introduced a wrong import statement). However, builds tend to be long for enterprise applications; tests (both unit and smoke) can take hours to execute. So it’s imperative to reduce feedback time and therefore to divide massive jobs into small and fast units.

For example, a regular build process would generally perform the following steps (for each module/component):

  • compile the code
  • run unit tests
  • package the code
  • run integration tests
  • extract quality metrics
  • generate reports and documentation
  • deploy the package on a server

It’s clear that carrying out all those tasks on a multi-module project can take a considerable length of time, leaving the development team waiting before they can switch to the next task. It’s not rare to see a Maven build take up to 30 minutes just for compilation, tests and packaging.

Moreover, smaller jobs offer much more flexibility. For example, one can restart from a failing step without restarting the full build from scratch.

A true story: Restarting a test server

Recently, in one of our projects, we had to clean up and re-populate a test server. In the first version, a script was executing the following actions in one massive job:

  • Stop the server
  • Clean up the file system
  • Drop tables
  • Create tables and populate with test data
  • Start the server

The process took roughly 10 minutes and, in case of failure, did not allow restarting from the failing step.

In a second version, each action was executed in its own job. However, the jobs were not dependent on each other, so someone still had to start and wait for them one by one, in cycles of about 10 minutes.

Jobs to achieve our second scenario

Finally, we linked jobs together to trigger the whole process by simply starting the first job. To create those dependencies, we tried three approaches of which two were successful (I’ll let you guess about the third one).

One-to-One Relationship

The first approach was pretty simple: one job triggers another one after successful completion. So in our case:

Stop server
    |-> Clean up filesystem
           |-> Drop database
                  |-> Create table and insert data
                        |-> Start server

The Jenkins log shows the jobs completing one after the other:

INFO: Pipeline Blog Post - Stop Server #3 main build action completed: SUCCESS
07.11.2011 15:12:20 hudson.model.Run run
INFO: Pipeline Blog Post - Cleanup #3 main build action completed: SUCCESS
07.11.2011 15:15:27 hudson.model.Run run
INFO: Pipeline Blog Post - Dropping tables #3 main build action completed: SUCCESS
07.11.2011 15:17:29 hudson.model.Run run
INFO: Pipeline Blog Post - Creating tables and Inserting data #3 main build action completed: SUCCESS
07.11.2011 15:22:31 hudson.model.Run run
INFO: Pipeline Blog Post - Start server #3 main build action completed: SUCCESS

To achieve this sort of relationship with Jenkins, you can either configure a post-build action that starts the next job (downstream) or configure a build trigger on the job to be started (upstream). Both ways are totally equivalent.

Triggering a job after successful completion of the current job

This first way is probably the most intuitive: in the current job, you configure which job should be triggered next. In our scenario, the ‘stop server’ job triggers, on successful completion, the ‘cleanup’ job. To configure this dependency, we add a post-build action on the Configure page of the ‘stop server’ job:

Trigger a build once the current job finishes
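
As a side note, you can verify this wiring without opening the browser: Jenkins exposes it through its JSON API. The following snippet is only a sketch (the server URL, the credentials and the exact job names are assumptions taken from the log output above, so adapt them to your installation); it simply reads the downstream projects of the ‘stop server’ job:

import requests
from urllib.parse import quote

JENKINS = "http://localhost:8080"   # placeholder: your Jenkins/Hudson URL
AUTH = ("user", "api-token")        # placeholder: a user and its API token

# Job names containing spaces have to be URL-encoded in the path.
job = "Pipeline Blog Post - Stop Server"
info = requests.get(f"{JENKINS}/job/{quote(job)}/api/json", auth=AUTH).json()

# The post-build action shows up as a downstream project of 'stop server'.
print([p["name"] for p in info.get("downstreamProjects", [])])
# expected here: ['Pipeline Blog Post - Cleanup']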

Starting the current job after completion of another build

With this second way, you configure which job triggers the current job. In our scenario, the ‘cleanup’ job is triggered after the ‘stop server’ job. So on the Configure page of ‘cleanup’, in the Build Triggers section, we select ‘Build after other projects are built’ and specify the ‘stop server’ job:

Using this one-to-one dependency approach is probably the simplest way to orchestrate jobs on Jenkins. The configuration is trivial, decomposing complex activities or build processes becomes straightforward, you can restart the stream from any point, and you get feedback after every success or failure.
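
Once the chain is wired up this way, starting the whole scenario only requires triggering the first job, by hand or remotely. Here is a minimal, purely illustrative sketch using Jenkins’ remote API from Python (URL, credentials and job name are placeholders for your own setup):

import requests
from urllib.parse import quote

JENKINS = "http://localhost:8080"   # placeholder URL
AUTH = ("user", "api-token")        # placeholder credentials

# Queue the first job; the post-build actions take care of the rest of the chain.
# Note: depending on your security settings, Jenkins may also require a CSRF
# crumb (obtainable from /crumbIssuer/api/json) to be sent with POST requests.
resp = requests.post(f"{JENKINS}/job/{quote('Pipeline Blog Post - Stop Server')}/build", auth=AUTH)
resp.raise_for_status()             # Jenkins answers 201 Created once the build is queued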

Über Job: the wrong good idea

One attempt to optimize the previous method was to create a kind of über job that triggers all the other jobs. Unfortunately, even though it looks brilliant on paper, it doesn’t work. There are two issues:

  • Even if a job fails, others are triggered
  • The job execution order is not deterministic

The first point is simple to explain: the jobs are not interconnected, so even if one fails, the others are still executed. This is a real problem when a job relies on the previous ones having completed successfully.

07.11.2011 15:32:51 hudson.model.Run run
INFO: Pipeline Blog Post - Uber Job #6 main build action completed: SUCCESS
07.11.2011 15:32:59 hudson.model.Run run
INFO: Pipeline Blog Post - Stop Server #5 main build action completed: SUCCESS
07.11.2011 15:33:01 hudson.model.Run run
INFO: Pipeline Blog Post - Cleanup #3 main build action completed: SUCCESS
07.11.2011 15:33:03 hudson.model.Run run
INFO: Pipeline Blog Post - Dropping tables #3 main build action completed: FAILURE
07.11.2011 15:33:05 hudson.model.Run run
INFO: Pipeline Blog Post - Creating tables and Inserting data #3 main build action completed: SUCCESS
07.11.2011 15:33:07 hudson.model.Run run
INFO: Pipeline Blog Post - Start server #3 main build action completed: SUCCESS

In this log, the ‘drop tables’ job failed. We would expect the whole scenario to come to a halt at this point, but unfortunately that’s not the case. When this happens, we can’t be sure of the resulting state of our restarted server.

The second point is trickier. Jenkins is built on an asynchronous model: jobs are scheduled and executed later. The order of execution can depend on many different parameters, so it is hard to predict. In our case, we have seen:

07.11.2011 15:32:51 hudson.model.Run run
INFO: Pipeline Blog Post - Pipeline #18 main build action completed: SUCCESS
07.11.2011 15:32:59 hudson.model.Run run
INFO: Pipeline Blog Post - Stop Server #15 main build action completed: SUCCESS
07.11.2011 15:33:01 hudson.model.Run run
INFO: Pipeline Blog Post - Cleanup #13 main build action completed: SUCCESS
07.11.2011 15:33:03 hudson.model.Run run
INFO: Pipeline Blog Post - Creating tables and Inserting data #13 main build action completed: SUCCESS
07.11.2011 15:33:05 hudson.model.Run run
INFO: Pipeline Blog Post - Dropping tables #13 main build action completed: SUCCESS
07.11.2011 15:33:07 hudson.model.Run run
INFO: Pipeline Blog Post - Start server #13 main build action completed: SUCCESS

You can see that the tables were dropped after the data was inserted. Well… I’ll let you guess the state of the server after this execution.

So, even though this method seemed like a good idea, it is actually a pretty bad one if you want your process to execute reliably.
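
To make the point concrete, here is roughly what a driver would have to do to be safe: start each job, wait for its result, and abort the whole scenario on the first failure, in a fixed order. The über job gives you none of this. The sketch below is illustrative only (it scripts the chain from the outside over Jenkins’ remote API; URLs, credentials and polling details are assumptions), and it is of course simpler to let the one-to-one dependencies above do the same thing inside Jenkins:

import time
import requests
from urllib.parse import quote

JENKINS = "http://localhost:8080"   # placeholder URL
AUTH = ("user", "api-token")        # placeholder credentials

CHAIN = [                           # the five jobs, in the order they must run
    "Pipeline Blog Post - Stop Server",
    "Pipeline Blog Post - Cleanup",
    "Pipeline Blog Post - Dropping tables",
    "Pipeline Blog Post - Creating tables and Inserting data",
    "Pipeline Blog Post - Start server",
]

def run_and_wait(job):
    """Trigger one job and block until its result is known."""
    base = f"{JENKINS}/job/{quote(job)}"
    number = requests.get(f"{base}/api/json", auth=AUTH).json()["nextBuildNumber"]
    requests.post(f"{base}/build", auth=AUTH).raise_for_status()
    while True:
        build = requests.get(f"{base}/{number}/api/json", auth=AUTH)
        if build.status_code == 200 and not build.json()["building"]:
            return build.json()["result"]   # SUCCESS, UNSTABLE or FAILURE
        time.sleep(5)                       # still queued or still running

for job in CHAIN:
    result = run_and_wait(job)
    print(f"{job}: {result}")
    if result != "SUCCESS":
        break                               # unlike the über job, stop right here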

A bit of optimization: Fork and Join

This last method uses a Jenkins plugin named ‘Join plugin’ (see the Join plugin page). In brief, this plugin allows you to configure fork/join patterns: once all of a job’s downstream projects have completed, further projects are triggered.

If you have jobs that can be run in any order, this plugin will reduce the amount of configuration you need. In our scenario, in the ‘stop server’ job, it can be used as follows:

So, the ‘stop server’ job triggers the ‘clean up’ and ‘drop table’ jobs. Once those (independent) jobs are completed, we trigger the data insertion and restart the server. In our experience, these two ‘join’ jobs were always executed in the right order (and on the same executor), but we recommend triggering only one job in the join and chaining the rest with one-to-one dependencies:

         
            /-> Cleanup    -\
Stop server                   * -> Insert data -> Start server
            \-> Drop table -/

This approach reduces the amount of configuration needed, but it should only be used if the forked jobs really are independent of each other.
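
For the curious, here is a hand-rolled, purely illustrative version of the same fork/join done from the outside with Jenkins’ remote API (all URLs, credentials and job names are placeholders). It only shows the pattern the plugin implements for you inside Jenkins: start the two independent jobs, wait until both have succeeded, then trigger the join job:

import time
import requests
from urllib.parse import quote

JENKINS = "http://localhost:8080"   # placeholder URL
AUTH = ("user", "api-token")        # placeholder credentials

FORK = ["Pipeline Blog Post - Cleanup", "Pipeline Blog Post - Dropping tables"]
JOIN = "Pipeline Blog Post - Creating tables and Inserting data"

def trigger(job):
    """Queue a build and return what we need to watch it."""
    base = f"{JENKINS}/job/{quote(job)}"
    number = requests.get(f"{base}/api/json", auth=AUTH).json()["nextBuildNumber"]
    requests.post(f"{base}/build", auth=AUTH).raise_for_status()
    return base, number

def result(base, number):
    """Build result, or None while the build is still queued or running."""
    r = requests.get(f"{base}/{number}/api/json", auth=AUTH)
    if r.status_code != 200 or r.json()["building"]:
        return None
    return r.json()["result"]

# Fork: the two independent jobs are queued together.
branches = [trigger(job) for job in FORK]

# Join: continue only once both branches have completed successfully.
while not all(result(base, number) == "SUCCESS" for base, number in branches):
    time.sleep(5)

trigger(JOIN)                       # 'insert data', which in turn restarts the server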

A last tip

Especially with the one-to-one approach, the pipeline can become pretty long. Jenkins has a nice plugin to visualize the downstream builds: the Downstream buildview plugin.

We recommend using this plugin to track the progress and visualize the result of complex/long pipelines.

Conclusion

Even if the advantages of splitting a long build process into small jobs are obvious, doing so may be more difficult than expected. This post has presented several ways to create dependencies between your jobs for Jenkins/Hudson.


Comments

10 responses to “Building pipelines by linking Jenkins jobs”

  1. There is an option in the config job page where you can configure the retry for job in case of failed and checkboxes if you still want to build the job in case of failure.

  2. Is there a way to make the Join Trigger to work even if one of the downstream jobs failed but successful on retry?

  3. tomekkaczanowski

    thank you, this is really useful!

  4. […] a source code update is enough. If you want to divide this job into several steps, have a look at this post. In our example, we deploy the application to a Nexus […]

  5. I used to employ a system where formal release candidates were produced separately to my check-in builds (also known as “snapshot” builds). This encouraged people to treat snapshot builds as second rate. The main focus was on the release builds. However, if every build is a potential release build, then the focus on each build is increased. Consequently, if every build could be a potential release candidate, then I need to make sure every build goes through the most rigorous testing possible, and I would like to see a comprehensive report on the stability and design of the build before it gets released. I would also like to do all of this automatically, and involve as little (preferably none at all) human intervention as possible.

  6. Well written. Thanks for this post 🙂

  7. Even if the advantages of splitting a long build process into small jobs are obvious, doing so may be more difficult than expected. This post has presented several ways to create dependencies between your jobs for Jenkins/Hudson.

  8. Very good information for beginners.

  9. Hi,
    Thank you for your nice information. I like your article.
    Thanks.

  10. Venkatesh

    very good blog for a beginner..
