Jumping into continuous delivery with Jenkins and SSH


Reading Time: 7 minutes

Let’s imagine the following situation: you’re working on an application for a customer. Despite a firm deadline and a roadmap agreed with the customer, they’d like to check the progress regularly, say weekly or even daily, and actually give you feedback.

So to make everybody happy, you start releasing the application weekly. However, releasing is generally a stressful affair, even for highly automated processes, and requires human actions. As the release date approaches, stress increases. The tension peaks one hour before the deadline when the release process fails, or worse, when the deployment fails. We’ve all been in situations like that, right?

This all-too-common nightmare can be largely avoided. This blog post presents a way to deal with the above situation using Jenkins, Nexus and SSH. The application is deployed continuously and without human intervention on a test environment, which can be checked and tested by the customer. Jenkins, a continuous integration server, is used as the orchestrator of the whole continuous delivery process.

The principles of Continuous Delivery

Continuous Delivery is basically a set of principles to automate the process of software delivery. The overall goal is to continuously deliver new versions of a software system.

It relies on automated testing, continuous integration and automated deployments. The application is packaged and deployed to test and production environments, resulting in the ability to rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead.

Continuous Delivery is based on the pipeline concept. A delivery pipeline describes the path the code takes from the developer’s machine to the production environment, in particular the testing stages and the deployment process.

A Simple Pipeline

Continuous delivery can be quite hard to set up for complex systems. We recommend starting with a simple configuration. Let’s take a simple web application deployed on an application server such as Tomcat or JBoss.

The journey of the source code from the developer machine to the test environment

The code of our application is hosted on a source code management (SCM) server. It can be Git, Subversion or anything else. It’s the entry point of our pipeline.

Our continuous integration server (Jenkins) pulls the code from the SCM. It builds and tests the application. If all tests are green, the application is packaged and deployed to an artifact repository (Nexus in our context). Then, Jenkins triggers the deployment process.

To achieve this, Jenkins connects to our host machine using SSH, and launches the deployment script. In this example, the deployment script is a simple shell script redeploying and restarting the application.

Finally, when the deployment is done, we check the availability of our web application.

Implementing the pipeline

The presented pipeline is quite simple, but works pretty well for most web/Java EE applications. To implement it, we need a Jenkins instance with two specific plugins (the SSH plugin and the Groovy plugin) and a host machine reachable through SSH.

Preparing Jenkins

The first thing to do is install two plugins in Jenkins: the Jenkins SSH plugin, which allows you to connect to the host machine via SSH and execute commands, and the Hudson Groovy Builder plugin, which executes Groovy code. We use Groovy to check the deployment result, but other options work as well (unit tests, Nagios…).

Once those two plugins are installed, you need to configure the connection to the host machine. In the Global Configuration page of Jenkins, scroll down to the SSH Remote Host section and add a host. Enter the machine name or the IP address, and the credentials. You can also use a key file.

Add an SSH connection to Jenkins

Once done and saved, it’s time to create our Jenkins Jobs supporting our continuous delivery process. To keep this example simple, we divide our process in two jobs:

  • The first job compiles, tests, builds and deploys the application. If successful, the new application archive is deployed on our Nexus repository.
  • The second job is triggered upon the success of the first job. It connects to the host machine and executes a shell script. Once done, it runs a simple Groovy script to check that the deployment succeeded. We use Groovy for its simplicity in making HTTP connections and retrieving the result, but plenty of other approaches work as well.

The first job configuration is not specific to the continuous delivery pipeline. It’s generally a regular job deploying the artifacts to a Maven repository. So, a simple Maven Job executing mvn clean deploy upon a source code update is enough. If you want to divide this job into several steps, have a look at this post. In our example, we deploy the application to a Nexus repository.
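As a sketch, the Nexus deployment target for mvn clean deploy is configured in the project’s pom.xml. The repository id and URL below are placeholders to adapt to your own Nexus setup; the matching credentials would go into your Maven settings.xml under a server entry with the same id:

```xml
<!-- Hypothetical distributionManagement section: tells "mvn clean deploy"
     where to upload the built artifacts. Id and URL are placeholders. -->
<distributionManagement>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>http://mynexus/content/repositories/snapshots</url>
  </snapshotRepository>
</distributionManagement>
```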

The second job is more interesting. Create a new freestyle project job. This job is triggered upon successful execution of the first job. So select None as Source Code Management and indicate the previous job name in the Build after other projects are built option.

The build is started when the previous build succeeds

In the Build section, add a first build step and select Execute shell script on remote host using ssh. Select your host and add the script. You can also enter a command directly, or execute a script already present on the host.

Launch the deployment script

Preparing the application host

At this point, we have a Jenkins job connecting to the host and executing some commands. To simplify, we focus only on the application deployment and not on the environment setup. So we consider that the host is ready to be used. The deployment script follows this basic pattern:

1) Stop the application
2) Retrieve the new application
3) Deploy the new application
4) Restart the application

First, if the application is running, it should be stopped (unless your application server supports hot redeployment). Then, we retrieve the application package: this step consists of downloading the latest version of our application from a repository. Once downloaded, the application is deployed, i.e. either copied to a deployment folder or unpacked to a specific location. Finally, we restart the application.

Steps 1) and 4) depend on your application server, but if you’re using Linux Upstart scripts, they should look something like:

stop my_application
...
start my_application

Or if you’re using a service:

/etc/init.d/my_application stop
...
/etc/init.d/my_application start

To retrieve the latest version of our application, we rely on the Nexus REST API and a script that downloads the latest version. This script is available here, and should be made available and executable on your host (note: this script requires curl). With this script, getting the latest version of our application is quite simple:

...
download-artifact-from-nexus.sh \
 -a mycompany:myapplication:LATEST \
 -e war \
 -o /tmp/my_application.war \
 -r public-snapshots \
 -n http://mynexus \
 -u username -p password
...

We first specify the Maven artifact to download, using the GroupId:ArtifactId:Version coordinates. We use LATEST as the version to download the latest version (a snapshot in our case). The -e parameter indicates the packaging type. Then, we indicate the output file. The -r option specifies the Maven repository on which the artifact is hosted (check your Nexus configuration to find this value). The remaining options set the Nexus URL and the credentials.

Deploying the application (step 3) depends on your execution environment. It generally consists of copying the downloaded archive to a specific directory.

So, to sum up, the following script can be a valid deployment script for a web application packaged as a war file executed on a Tomcat server:

# WEBAPP points to the Tomcat webapps folder (placeholder path)
export WEBAPP="/path/to/tomcat/webapps"
# Ignore a failing stop (the application may not be running yet)
stop my_application || true
# From here on, abort on the first failure so a failed download
# does not lead to deploying a stale archive
set -e
download-artifact-from-nexus.sh \
 -a mycompany:myapplication:LATEST \
 -e war \
 -o /tmp/my_application.war \
 -r public-snapshots \
 -n http://mynexus \
 -u username -p password
cp /tmp/my_application.war "$WEBAPP"
start my_application

So, if everything is configured correctly, once you commit a change to your application, this change should be deployed immediately to your test environment.

First, your application is tested, built and deployed on a Nexus repository. Then, a second Jenkins job connects to the host machine and runs a deployment script. This script retrieves and deploys the latest version of the application.

Checking the deployment

An improvement you could make is to check whether the deployment was performed correctly. For that, you can use Groovy. In the second Jenkins job, add a new build step: Execute Groovy Script. In the text area, just do a simple check like:

// Give the application server time to start (milliseconds, placeholder)
Thread.sleep(Startup_time)
def address = "Your_URL" // placeholder: the URL of your application
def url = address.toURL()
println "URL: ${url}"
def connection = url.openConnection()
println "Response Code/Message: ${connection.responseCode} / ${connection.responseMessage}"
assert connection.responseCode == 200

This simple Groovy script waits a couple of seconds (until your application is actually started), and connects to your application. If the application response is correct, then everything is fine. If not, the build is marked as failed, and you should have a look. Obviously, this simple script can be improved and adapted to your situation.
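If you prefer to keep everything in shell, the same check can be sketched as a small curl-based polling loop. This is only a sketch: the function name, URL, and timing values are assumptions to adapt to your environment, and it replaces the fixed sleep with a retry loop.

```shell
#!/bin/sh
# Hypothetical health check: poll a URL until it answers HTTP 200,
# retrying a fixed number of times. All values are placeholders.
wait_for_url() {
  url=$1; max_tries=$2; delay=$3
  i=1
  while [ "$i" -le "$max_tries" ]; do
    # -s silences progress output, -o discards the body,
    # -w prints only the HTTP status code
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" 2>/dev/null || echo 000)
    if [ "$code" = "200" ]; then
      echo "application is up (HTTP $code)"
      return 0
    fi
    echo "attempt $i/$max_tries: HTTP $code, retrying..."
    sleep "$delay"
    i=$((i + 1))
  done
  echo "application did not come up in time" >&2
  return 1
}

# Usage in the Jenkins build step (placeholder URL):
# wait_for_url "http://app-host:8080/my_application" 10 5 || exit 1
```

Failing the shell step this way marks the build red in Jenkins, just like the failed Groovy assertion.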

That’s it!

This blog post has presented a way to implement continuous delivery for applications using Jenkins, Nexus and a machine accessible using SSH. Obviously other combinations are possible.

Continuous delivery may be hard to achieve in one step, but as illustrated in this post, it can be set up pretty easily to continuously deploy an application to a testing environment.

Thanks to these principles, development becomes more reactive: changes are visible immediately. Moreover, errors and bugs are detected earlier.

It’s up to you to tune your pipeline to fit your needs. For instance, you could push the application to the test environment nightly instead of after every change.
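For the nightly variant, the second job’s trigger could be switched from Build after other projects are built to Build periodically with a cron-style schedule. The exact time below is just an example:

```
# Jenkins "Build periodically" schedule: every night at 2 AM.
# Jenkins also accepts H for a hashed value (e.g. "H 2 * * *")
# to spread the load across jobs.
0 2 * * *
```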

akquinet tech@spree is now using continuous delivery principles in several projects. The results are very beneficial: test campaigns have improved, and thanks to the pipeline, developers can focus on the development of new features and bug fixes while still seeing their changes immediately.


Comments

10 responses to “Jumping into continuous delivery with Jenkins and SSH”

  1. how it will get “LATEST” version value in the above script i mean from where?

  2. thanks for good article…Quick question…How can I inject String parameters in to the SHELL SCRIPT on remote host?
    Reply would be greatly appreciated.

    thanks!


  4. […] last blog post (Jumping into continuous delivery with Jenkins and SSH) demonstrated how to build and deploy an application continuously using Jenkins, SSH and Nexus. In […]

  5. Hi,

    That’s a really good idea. However, Cargo does not apply to all application servers. For instance, we often use the Play Framework, which proposes a different execution model.

    But anyway, I’d like to experiment with your approach.

    Thanks

  6. Hi Ali,

    Using Cargo to support the deployment is a really good idea, and is a valid alternative to DeployIt or even Apache Ace.

    I understand your complaints about the SSH connection and the ‘implicit’ knowledge about the host structure. However, this blog post is just a first step.

    Then, if we use the _Cloud_ vocabulary, there are two ways to support the deployment steps:
    * the PaaS way, where you just deploy on an already configured, running and perfectly tuned container. That’s where I see the Cargo approach. For test environments as described in this post, it fits perfectly.
    * the IaaS way, aiming to configure the complete system (from database to application server, network configuration, HTTP frontend…). To go this way, you can use Puppet or Chef (let’s try to avoid the infamous install-server.sh, a tiny Bash script of 2500 lines minimum).

    The main difference is the control you require. If you know that your system is perfectly configured and is using standard stacks / configurations, the PaaS-way is the right direction. Easily reproducible, deployment supported by the container and so on.

    However, when you build less common setups or require some specific configuration, the IaaS way may be better suited. On the other hand, it is trickier to achieve. On this blog, we recently presented how we use Puppet for this, and we plan to write another post about it soon.

    1. Hi Clément

      Agreed, SSH is probably more interesting if you want to configure the whole container “from scratch”. On the other hand, even in that scenario CARGO has an advantage: you only define configuration options (HTTP port, database connections, drivers, …) and it configures automatically the container independently from its type (Jetty, Tomcat, JBoss, GlassFish, …) and its version. You could therefore use CARGO to configure the container and then “push” it via SSH.

      Cheers

  7. Hi there

    Though I find the idea interesting, deploying via SSH has certain drawbacks: you need to know in advance the exact directory structure of the target server, you get very little feedback if the deployment fails, and I assume it would be difficult to script a deployment that operates on a cluster.

    I would rather recommend you to use Maven2/Maven3 plugins such as Codehaus Cargo (http://cargo.codehaus.org/): you can use it at the same time for the integration testing on your CI server but also to remotely deploy the application on the target server -independently from its software (as CARGO supports many different servers; Jetty, Tomcat, Glassfish, JBoss, WebLogic, …) and does the remote deployment via the target server’s deployment options (HTTPS, JMX, JSR88, …) automagically. Moreover, if the deployment fails, CARGO can inform you of the status.

    Note that you can also use CARGO from a Java API, ANT tasks, from m2eclipse or Gradle. And, of course, as a Codehaus project it is open source and comes at no charge.

    Cheers
