April 25, 2017

Enterprise Large Batch Releases (Automated Deployments)

Many enterprises regularly schedule large, integrated releases despite knowing that these large batch releases are difficult to manage and carry increased risk.

Is this something your organization is struggling with? Does your organization have a management team to help bring together multiple development teams with their products and their dependencies to create a “release train” for enterprise large batch releases? Is this coordination difficult, or does it often require changes late in a release cycle? When changes are required to one or more products in the batch, how do you track which products and which dependencies are being delivered?

If you understand these problems and want to know how to better organize enterprise large batch releases, a few changes in how you track and promote software products on their way to production can help.

Enterprise Large Batch Releases

Although some organizations have built pipelines with automated environment creation, it is still very common for an enterprise to maintain a set of static environments used in a promotion path from development to production. Regular deployments of many applications or services are often required to keep a QA environment up to date throughout a development cycle. Keeping track of these deployments and understanding how all of these products and their dependencies fit together can end up being the responsibility of a dedicated deployment team, or even a single person. Dedicating a team or individual to this process gives some much-needed attention to a difficult task, but a few things can be done to reduce the complexity of deploying multiple products for scheduled enterprise large batch releases.

You can make a few small changes to your existing deployment automation to make this process a bit more robust and repeatable. Let’s look at two ways we can improve it:

  1. Manifest-based Deployments — Use product manifests to track enterprise large batch releases, artifacts, and their versions in each environment in the delivery pipeline.
  2. Scheduled “Trigger” Promotion Jobs — Create automated, scheduled jobs to promote artifacts regularly to pre/non-production environments.

Manifest-based Deployments

Building a product manifest with the approved versions of each product lets you deploy a predictable batch of artifacts to any environment of your choosing. This means you know exactly which versions of your applications exist in an environment. For instance, you can take all the necessary products to any environment and version that particular set of applications, making it easy to replicate the deployment activities. Using a promotion path to create a pipeline for products on their way to production helps ensure the right versions are where you want them at any time.

What about going the other way? Production back down maybe? You could also use a manifest with current production versions to populate a lower environment with everything needed to replicate current state in order to reproduce a newly discovered defect.

We organize our manifests with filenames that describe the corresponding event. In this example, we have a production release named “Jupiter,” so the filename corresponding to this release is “prod-release-jupiter.json.” If you reference your releases by dates or version numbers, your naming could differ. That’s up to you.

A list of different product manifest environments.

Here, we are using a JSON file to represent the manifest. You could choose a different file type, but the key is that we keep these files in source control, so any updates to the manifest files are tracked and logged. We can be sure that wherever a manifest is used, the exact versions specified will be deployed. For example, we know that the Jupiter release included version 1.1.0 of the login-svc. This file is parsed by an automation script (a Groovy pipeline DSL script) in a Jenkins job, and there is one manifest job per environment in our Jenkins instance.

Contents of a JSON file that represents a manifest for the Jupiter release.
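As a sketch, a manifest for the Jupiter release might look something like this. The login-svc version is the one mentioned above; the other product names, versions, and the exact field layout are hypothetical:

```json
{
  "release": "jupiter",
  "environment": "prod",
  "products": [
    { "name": "login-svc", "version": "1.1.0" },
    { "name": "catalog-svc", "version": "2.3.1" },
    { "name": "web-ui", "version": "1.8.0" }
  ]
}
```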

These jobs trigger the downstream deployment jobs of each product in the manifest. There is one job per product per environment so the versions are easily tracked. The “manifest-deploy-prod” job would trigger each of the product deployments for the production environment.

The downstream deployment jobs triggered from the manifest deploy prod job.
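As a rough sketch of the kind of automation involved, a manifest job could read the manifest from source control and fan out to the per-product deploy jobs. The job names, manifest path, and parameter names here are hypothetical, and `readJSON` assumes the Pipeline Utility Steps plugin is installed:

```groovy
// Sketch only: job names, manifest path, and parameters are hypothetical.
// Reads the manifest from source control and triggers one downstream
// deploy job per product for the target environment.
node {
    checkout scm
    def manifest = readJSON file: 'manifests/prod-release-jupiter.json'
    manifest.products.each { product ->
        // One deploy job exists per product per environment,
        // e.g. "deploy-prod-login-svc"
        build job: "deploy-prod-${product.name}",
              parameters: [string(name: 'VERSION', value: product.version)]
    }
}
```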

These deployments can be locked down so that they are only triggered via manifests. You will be able to easily replicate deployments to production by simply using the same manifest for deployments to other environments.

Scheduled Promotion of Artifacts

Creating automated jobs in a pipeline tool such as Jenkins can also deliver products in a predictable way without the need for manual deployment. We can use similar methods within the automated CI process so our products deploy as far along our pipeline as we feel comfortable. These automated “trigger” jobs take products through environments as long as all the required delivery prerequisites, such as smoke and regression tests, pass in each environment along the way. How might you determine what can be promoted to QA or integration environments? Odds are that in many enterprise environments, you’re waiting for a go-ahead or an event to say it’s OK to deploy to an environment higher than dev.

We can determine that a product is ready to promote by automating the validation of features in a lower environment. Depending on the nature of the application, measuring a few things can get you enough information to feel comfortable promoting the application without a lot of manual intervention.

  • Have a code coverage test and report
      ◦ By running automated unit tests and reporting the level of code coverage, your team can agree on a baseline to work from
      ◦ When the team agrees to always increase code coverage of unit tests, this baseline can increase over time
      ◦ With each build, you can feel more confident that your application’s code coverage is not getting worse
  • Run automated tests after each CI build and deploy
      ◦ This one is a must-have. With a sufficient number of UI/integration tests running in an environment, the actual functionality can be validated
      ◦ Start with smoke tests — a small, easily runnable set of tests that show the application supports its core functionality
      ◦ Depending on the number of tests and size of the application, you may be able to run more tests in each environment
      ◦ Build more integration and regression tests over time
      ◦ When you can run every functional test and regression test in a reasonable time window, you can feel comfortable that the application is stable and ready for production

Similar to the manifest approach above, we have built automation to generate parameterized Jenkins jobs that pull the latest “good” artifacts from one environment and promote them to a target environment. The versions are determined by pulling the latest version of each product during a scheduled daily run of the promotion “trigger” job. To make this work, we tie smoke tests to every deployment. If the smoke tests fail, the deployment is marked as “failed.” The latest “good” version is the last one that passed both deployment and smoke tests. By ensuring every deployment includes valid smoke tests, we can trust our automation to pick up a good version and move it forward.

In order to build this feature, we have created a few scripts that describe how the promotion should occur. Essentially, we can create a modified manifest file to help orchestrate the delivery of application code to environments either manually or in an automated way. Here’s a summary of the approach we took to make this work:

  • We have a JSON file that prescribes a deployment destination environment and a set of sources
  • The “sources” are where the products are being pulled from
  • We can pull from multiple environments and specify “LATEST” to get the most recent version that corresponds to the latest “good,” tested product
  • The “LATEST” tag can be overridden with a specific version if required. In our case, we are specifying a specific version of a service and the latest web components
  • The “trigger” job created from this JSON file via the automation seed job script simply triggers other jobs we have already created
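For illustration, a promotion spec along these lines might look like the following. The field names and product names are hypothetical; the pinned service version alongside “LATEST” web components mirrors the case described above:

```json
{
  "destination": "qa",
  "sources": [
    { "environment": "dev", "product": "login-svc", "version": "1.1.0" },
    { "environment": "dev", "product": "web-ui", "version": "LATEST" }
  ]
}
```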

Groovy Jenkins Job DSL scripts parse the JSON files and create the Jenkins jobs, allowing us to build as many promotion jobs as required via automation. There is one promotion “trigger” job per JSON file after the automation runs, and each job is updated whenever the automation management job is run.

Screenshot of Deploy Trigger Job code
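As an illustrative sketch of such a Job DSL seed script (the job names, file path, and cron schedule are all hypothetical), generating a scheduled trigger job from one of these JSON files could look like:

```groovy
// Sketch only: names, paths, and schedule are hypothetical.
import groovy.json.JsonSlurper

def spec = new JsonSlurper().parseText(
    readFileFromWorkspace('promotions/promote-to-qa.json'))

job("trigger-promote-${spec.destination}") {
    triggers {
        cron('H 6 * * *') // promote once a day
    }
    steps {
        spec.sources.each { source ->
            // Trigger the existing per-product deploy job for the
            // destination environment, passing "LATEST" or a pinned version.
            downstreamParameterized {
                trigger("deploy-${spec.destination}-${source.product}") {
                    parameters {
                        predefinedProp('VERSION', source.version)
                    }
                }
            }
        }
    }
}
```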

Final Thoughts

Ultimately, the same deployment jobs that deploy products to environments normally are triggered so that the downstream smoke tests will run. What we’re doing differently with enterprise large batch releases is passing the version from one environment to the deployment job for another environment. This process allows us to schedule environment refreshes or daily builds to QA, integration, or other environments as we see fit.

If you have any comments or questions, reach out to us @liatrio.

