February 6, 2017

Jenkins Automation: Background, Configuration, and Usage


Background on Jenkins and Jenkins Automation

In traditional waterfall IT organizations, Jenkins existed in a precarious position. Jenkins' main focus is builds (everything is a build job), but since it's a service that needs to be hosted somewhere, it falls somewhere between developers and operations. Developers needed integration environments to run their applications, and would set up Jenkins to accomplish this. They would use Jenkins to build on code check-in and possibly deploy to shared dev servers.

If operations wasn't involved in setting Jenkins up, they wouldn't trust the artifacts that came out of it. This often caused operations to do their own builds of the developers' code once it was ready to be released. Because Jenkins wasn't an operations priority, it wasn't bulletproof. Manual setup was easy, since Jenkins is just a jar that can run anywhere. Most jobs on these Jenkins instances were plain Maven build jobs. The artifacts would be installed into Jenkins's .m2 directory and consumed by other Jenkins builds of products that depended on them.

Deleting jobs in Jenkins is tedious: unless you script it, jobs have to be deleted one at a time. This is why treating Jenkins itself as something to develop against was not common. Since everything was manual, if someone changed a setting that broke things (for example, switching to a different version of Maven), that user would see the error and hopefully realize what had changed. Plugins such as "JobConfigHistoryPlugin" helped debug breaking changes when they were introduced. For every need that arose, there seemed to be a Jenkins plugin to accomplish the task. More and more often, though, a plugin would cause issues on Jenkins because of dependency versions, or simply because it was not well written. This introduced the need for a sandbox Jenkins instance where new changes could be verified before reaching the main server.

The Need for Jenkins Automation

Fast forward a couple of years, and Jenkins is no longer just doing builds. It's normal for it to be deploying to higher environments and to production. Jenkins has moved from being a developer or release manager problem to an operations problem. Jenkins is no longer sitting on an old PC under the tech lead's desk; it's in a locked server rack in a colocation data center. Since Jenkins is now an operations tool, it's required to be up 99% of the time. With this additional responsibility, Jenkins needs to be as locked down as other application servers, since one person's mistake could make it unusable.

At this point, Jenkins is running more than just build jobs and deployment jobs. Smoke-test suites are now needed to validate that deployments succeeded and that applications started up correctly. In an enterprise with 12 environments, 25 products can quickly turn into 700 jobs. Managing these jobs manually is next to impossible, which is where Jenkins automation comes in. The Jenkins Job DSL allows jobs to be written as code.
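As a minimal sketch of what that looks like (the job name, repository URL, and Maven goals below are hypothetical placeholders, not taken from this article), a Job DSL script defines a build job entirely in code:

```groovy
// Job DSL sketch: a simple Maven build job defined as code.
// The repository URL, branch, and goals are illustrative placeholders.
job('example-product-build') {
    scm {
        git('https://git.example.com/example-product.git', 'master')
    }
    triggers {
        scm('H/5 * * * *')   // poll for new commits roughly every 5 minutes
    }
    steps {
        maven('clean install')
    }
}
```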


Jenkins automation means jobs can now be added via code, or by creating a config that is parsed by code. Adding one environment name to a list could create a new deploy job for every product. This is a double-edged sword: as easy as it is to spin up hundreds of jobs, it's also easy to make a mistake and impact, or even delete, hundreds of jobs. The job-generating scripts and configurations become a product of their own, since they are just as delicate as the products they build and manage.
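For instance, here is a sketch of how a list of environments and products could drive job generation (the environment names, product names, and deploy script are hypothetical):

```groovy
// Job DSL sketch: generate one deploy job per product per environment.
// Environment names, product names, and deploy.sh are placeholders.
def environments = ['dev', 'qa', 'staging', 'prod']
def products = ['billing-service', 'catalog-service']

environments.each { env ->
    products.each { product ->
        job("deploy-${product}-to-${env}") {
            parameters {
                stringParam('VERSION', '', 'Artifact version to deploy')
            }
            steps {
                // VERSION is resolved at build time, not when the DSL runs
                shell("./deploy.sh ${product} ${env} \${VERSION}")
            }
        }
    }
}
```

Adding a fifth environment to the list would generate a new deploy job for every product on the next DSL run, which is exactly the double-edged sword described above.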

Once this workflow is set up, it works quite well. It has huge benefits over manually created jobs, but jobs generated by code are no different from jobs that could be set up manually. We link jobs together and create other jobs to promote versions from one job to another, but they are still just Jenkins jobs, originally designed 10 years ago for nothing more than building Maven projects.

Moving Forward with Jenkins Automation

Jenkins Pipeline is a Jenkins plugin that no longer uses jobs. Instead, you declare a pipeline in code, made up of stages. Stages provide similar functionality to jobs, but are much more flexible. Each job is limited to its own workspace, whereas stages can share a workspace or use a new one.
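As a rough sketch (the stage names and shell commands are illustrative, not from this article), a single pipeline can cover what used to be several linked jobs:

```groovy
// Jenkinsfile sketch: a declarative pipeline with stages instead of separate jobs.
// Stage names and shell commands are illustrative placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Smoke Test') {
            steps {
                sh './run-smoke-tests.sh'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh dev'
            }
        }
    }
}
```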

To replicate Jenkins jobs in lower sandbox environments, we follow two strategies:

  1. The same jobs run locally as they do in sandbox and prod.
  2. Environment variables determine the environment and swap out functionality, URLs, etc.

Sandbox Jenkins

Jenkins jobs, plugins, Groovy versions, etc., need to be tested, and not just locally. It's important to have as close to a copy of production as possible in another environment (or two) to verify changes. I have 700 jobs on my local Vagrant Jenkins to eyeball configurations, test plugins, etc., but I disable them all by default, since leaving them enabled would be a resource hog. With a prod-like environment, you could actually have all your jobs run.
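One way to do that bulk disabling (a sketch using the Jenkins script console, not something prescribed in this article) is a short Groovy script:

```groovy
// Script console sketch: disable every freestyle/Maven project on the instance.
// Run from Manage Jenkins > Script Console; adjust the filtering to taste.
import jenkins.model.Jenkins
import hudson.model.AbstractProject

Jenkins.instance.getAllItems(AbstractProject.class).each { job ->
    job.disable()          // persists the disabled flag to the job's config
    println "Disabled ${job.fullName}"
}
```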

If this were AWS or Facebook, I'm sure there would be a full prod-like environment to verify that anything can be built and deployed to destroyable containers. At most companies this is unfortunately not the case, especially when it comes to licensed software. Even so, there's a ton of value in running as much as possible in non-prod environments. This goes back to how one change can render a Jenkins server ineffective.

Like any piece of software, Jenkins is vulnerable to dependency hell. Plugins often depend on other plugins. Upgrading one plugin can require updating another, which in turn could cause a third plugin to stop working. In recent years this has become less of a problem, yet going from Jenkins 1 to Jenkins 2 was no easy task. Things get especially messy if the Groovy DSL syntax changes from one version of a plugin to another. For example, the HipChat DSL changed slightly from v1 to v2; when the Job DSL was run with the updated plugin, it broke and the code had to be updated. This can all be verified on the sandbox before updating production, but it requires that the Job DSL actually be run in the sandbox environment.
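One way to make that happen (a sketch assuming the Job DSL plugin's jobDsl pipeline step; the target path is a placeholder) is a seed pipeline that every Jenkins instance, sandbox and production alike, runs against the same repository:

```groovy
// Seed pipeline sketch: applies all Job DSL scripts checked into the repo.
// Running the same seed on sandbox first surfaces DSL or plugin breakage
// before it reaches production. The 'jobs/**/*.groovy' path is a placeholder.
pipeline {
    agent any
    stages {
        stage('Generate Jobs') {
            steps {
                checkout scm
                jobDsl targets: 'jobs/**/*.groovy',
                       removedJobAction: 'IGNORE'
            }
        }
    }
}
```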

Environment Configurations in Jobs

As easy as it is to hard-code production URLs, this becomes a problem as soon as there is a second Jenkins instance in the sandbox environment, or even locally. We have found that the best way to encourage local development is to make it as easy as possible to develop locally. Following the idea that anyone should be able to clone the pipeline repository and run it locally, the pipeline's default configuration should be set for local development. Running `mvn deploy` or kicking off a manifest deployment should not deploy to the production artifact repository, or anything along those lines.
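A sketch of that idea (the JENKINS_ENV variable name and the repository URLs are hypothetical, not from this article): default to local-friendly settings and only switch to production endpoints when an environment variable explicitly says so.

```groovy
// Scripted-pipeline sketch: default to local settings; only use production
// endpoints when JENKINS_ENV is explicitly set on that Jenkins instance.
// JENKINS_ENV and the repository URLs are hypothetical placeholders.
node {
    def jenkinsEnv = env.JENKINS_ENV ?: 'local'

    // Map each environment to its artifact repository; unknown values fall back to local.
    def repos = [
        local  : 'http://localhost:8081/repository/snapshots',
        sandbox: 'https://nexus.sandbox.example.com/repository/snapshots',
        prod   : 'https://nexus.example.com/repository/releases'
    ]
    def artifactRepo = repos[jenkinsEnv] ?: repos.local

    echo "Deploying artifacts to ${artifactRepo} (environment: ${jenkinsEnv})"
    sh "mvn -B deploy -DaltDeploymentRepository=repo::default::${artifactRepo}"
}
```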

Docker Docker Docker

Virtual machines were effective because they allowed more flexibility than bare metal, but at a cost. Virtual machines are big and heavy, even with modern networking speeds. VMs tend to be static in enterprises, but a few widely used languages require building with different versions of the language. Take Node.js, for example. Some apps still need to be built with Node 0.11, some with Node 4, and some of the newest apps need to be built with Node 7. To work around the inability to create and destroy VMs with different Node versions, NVM allows different Node versions to be used on the same server. This works, but it's not easy to set up and requires an additional dependency.

Vagrant finally provided an effective way to reproduce production-like environments locally, but not without problems. We've found spinning up a Jenkins node locally on Vagrant to be quite delicate. It required everyone to have the same versions of Vagrant, VirtualBox, Chef, and Berkshelf for our Vagrant boxes to work correctly. Between that and how much of a resource hog running multiple VMs turned out to be, it wasn't worth it for many developers to run Jenkins locally. In comes Docker. Docker is so fast (a benefit of being lightweight) that we can create a new container for every build. The container can be as simple as just Java and Maven, or as complex as one loaded with dependencies for headless browser testing.
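As a sketch of what that looks like (assuming the Docker Pipeline plugin is installed; the image tags are illustrative), each build can simply declare the container it needs:

```groovy
// Jenkinsfile sketch: run each build inside a throwaway container.
// Assumes the Docker Pipeline plugin; image names and tags are illustrative.
pipeline {
    agent {
        docker { image 'node:7' }   // swap in node:4 or node:0.11 per app
    }
    stages {
        stage('Build') {
            steps {
                sh 'node --version'
                sh 'npm install && npm test'
            }
        }
    }
}
```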

Jenkins Automation Summary

In this post, we talked about how Jenkins automation is key to an enterprise Continuous Integration and Continuous Delivery strategy and how it has evolved over the last few years. Jenkins 2.0 and the concept of pipelines will really change how builds and deployments are done, and what jobs even mean. We also touched on Docker and how to take advantage of it.

If you have any comments or questions, reach out to us.

