March 7, 2019

How Enterprises Should Set Up an Infrastructure Delivery Pipeline

Imagine if you could deliver a pattern and process to an enterprise that enables the building and provisioning of a server or container using an efficient, secure, and compliant process. But wait ... we can do that already! As it turns out, that’s usually the easy part of the problem to solve.

Many tools are available today to automate the building, configuring, and provisioning of infrastructure. Businesses often think that simply adding configuration management (CM) tools to their infrastructure management practices equates to delivering infrastructure to software engineers. However, CM tools are only the first step in delivering infrastructure to engineering teams as an automated, consumable service.

The disconnect occurs largely because many enterprises move towards automated ways of provisioning their servers and systems but fail to realize that automation involves more than simply replacing their handcrafted infrastructure with CM tools that can configure an instance. They often treat new automated systems just like the physical systems they had before -- like antiques instead of pieces of IKEA furniture. They miss the changes that have to occur to increase efficiency in the system as a whole.

For example, InfoSec may have to run manual tests and manually apply updates to certify a system as compliant and secure before it can be used, resulting in long lead times to deliver a system for use by engineering or production support teams. Enterprises often fail to recognize the fundamental capabilities the automation offers in terms of improving efficiency and enabling engineering teams to deliver high-quality features to customers.

Here, I’ll dive into what we at Liatrio call Infrastructure Delivery Pipelines. In particular, I’ll discuss our opinion on how enterprise organizations should set up their delivery pipelines in order to improve the organization as a whole.

What Is an Infrastructure Delivery Pipeline?

In its simplest form, an infrastructure delivery pipeline is a set of Jenkins pipelines that break down the different components of the infrastructure that an application needs to run on. Each server is treated as an artifact with a set of specific tools running on it, just as applications and services are treated as artifacts. The overall pipeline is actually a collection of multiple pipelines that produce containers, VMs, AMIs, or other virtualized instances.

Business and Engineering Benefits of an Infrastructure Delivery Pipeline

Using pipelines to deliver infrastructure brings several benefits to both the enterprise and, more specifically, enterprise engineers. Pipelines provide an idempotent process for producing the instances that run customer-facing products, as well as the applications and tools that run the business day to day. Ultimately, an infrastructure delivery pipeline yields more consistently configured instances that take less time to provision, reduces defects caused by disparate configurations between development and production, and shortens the time to market for changes from engineering.

Engineers, in turn, gain access to environments and infrastructure in a self-service mode and no longer need to reach out to operations or other groups to provision new systems for use in the development process. They can now provision a server or set of servers on demand to be used for development and testing purposes. In addition, they can test operating system patch interactions with their applications much earlier in the development process, leading to better quality and lower security risk. Operations engineers, in turn, can follow a software development process to produce and update infrastructure and more efficiently provide the infrastructure for engineering teams to use.

Core Tenets of an Infrastructure Delivery Pipeline

  1. Any instance built as a base image contains only packages or tools that are part of the host OS or are common to all instances running across the enterprise.
  2. Any engineer should be able to run this instance on their local machine.
  3. No manual changes are ever made to this instance; all changes made to the configuration of the instance must be in code (hence the term “Everything as Code”).
  4. No VMs are updated in place; instead, they are replaced with new VMs whenever patching or application updates are needed, and these replacements become part of the overall release cycle. (One caveat to this rule concerns database servers: due to the difficulty of migrating large amounts of data, we typically use automation to update these hosts in place and follow a more scheduled process to update the infrastructure supporting the instance.)
  5. All Jenkins pipelines are configured via a Jenkinsfile stored in source control. The Jenkins console does not need to be used for pipeline configuration or execution.
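
To make tenet 5 concrete, here is a minimal sketch of a declarative Jenkinsfile that would live at the root of an infrastructure repo. The trigger and stage names are illustrative assumptions, not a prescribed layout:

    // Jenkinsfile -- stored in source control alongside the infrastructure
    // code, so the pipeline definition itself is versioned (tenet 5).
    // The trigger and stages below are illustrative assumptions.
    pipeline {
        agent any
        triggers {
            // Polling is a fallback; a GitHub webhook normally starts the run.
            pollSCM('H/15 * * * *')
        }
        stages {
            stage('Checkout SCM') {
                steps { checkout scm }
            }
            stage('Build and Test') {
                steps {
                    // Placeholder; the pipeline walkthroughs below show the
                    // concrete build, test, and publish stages.
                    echo 'Build, test, and publish the instance here.'
                }
            }
        }
    }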

Infrastructure Delivery Pipeline Tools

Ansible - Ansible is our CM tool of choice because of its flexibility to support both ephemeral and non-ephemeral hosts and its easy adoption by teams currently managing infrastructure.

Terraform - Terraform is our provisioning tool of choice because it allows us to build provisioning plans that can be migrated across both on-premises data centers and cloud providers.

Packer - Packer is our preferred packaging tool for VMs, AMIs, containers, and other instances.

VMware - Our initial work has involved automating pipelines to work with VMware, a common tool used by enterprise clients.

Artifactory - We recommend Artifactory as a centralized artifact store for use across an enterprise. Artifactory provides a single location to store and obtain binaries and packages for engineering and operations use. It also can be used to run regular vulnerability scans against those binaries and packages.

InSpec - InSpec is used for security validation and remediation of an instance.

Jenkins - Jenkins is the primary job runner for the pipeline and our recommended CI tool.

Three Layers of Infrastructure Delivery Pipelines

Three different pipelines form the Infrastructure Delivery Pipeline: a base image pipeline, a middleware pipeline, and an environment delivery pipeline. I’ll discuss these pipelines below.

Base Image Pipeline

The first step is to build a base image pipeline in Jenkins. The base image pipeline builds a base VM template image, which is produced using several tools, including Packer, Jenkins, and Ansible. The code behind the base image changes infrequently, usually only for patching or updates, but the pipeline should still run on a regular schedule so that the latest version of the image is always available to all downstream-dependent pipelines.

A diagram of the base image pipeline.

The steps are as follows:

  • Commit Code - An engineer makes changes to the code for the base OS instance and commits it to a GitHub repo, which triggers a Jenkins pipeline run.
  • Pipeline Trigger - Jenkins pulls down the latest code from GitHub.
  • Checkout SCM and Pull Dependencies - Ansible scripts for configuring the base OS instance are pulled down.
  • Static Code Analysis - Static code analysis is run against the Ansible code via Sanity Tests in Ansible.
  • Build Image - The image build is initiated on VMware by provisioning a new instance off of a base ISO for the OS. Packer initiates the run of all of the Ansible scripts and installers to set up all of the common tools for the enterprise. Packer then produces a new image instance.
  • Unit Test Image - Unit tests are run against the instance to ensure that all tools are installed correctly and all installed services start up correctly.
  • Security Testing - Security tests are run against the instance to verify that the correct security settings have been applied. Any security issues discovered during the scan are remediated so that the system meets the required security baseline.
  • Publish Image - The new VM Image is published, version tagged back into VMware, and made available for use.
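
As a rough sketch, the stages above might map onto a declarative Jenkinsfile like the one below. The playbook names, Packer template, InSpec profile, and test host are hypothetical stand-ins, not the exact files an enterprise would use:

    // Hypothetical base image pipeline; file, profile, and host names are
    // illustrative assumptions.
    pipeline {
        agent any
        stages {
            stage('Checkout SCM and Pull Dependencies') {
                steps {
                    checkout scm
                    // Pull the Ansible roles the base image depends on.
                    sh 'ansible-galaxy install -r requirements.yml'
                }
            }
            stage('Static Code Analysis') {
                steps {
                    // Stand-in for the Ansible sanity tests described above.
                    sh 'ansible-lint playbooks/base-os.yml'
                }
            }
            stage('Build Image') {
                steps {
                    // Packer provisions a VM from the base ISO on VMware and
                    // runs the Ansible playbooks that install common tooling.
                    sh 'packer build base-image.json'
                }
            }
            stage('Unit Test and Security Test Image') {
                steps {
                    // InSpec checks installed packages, service startup, and
                    // the enterprise security baseline on the new instance.
                    sh 'inspec exec profiles/base-os -t ssh://tester@base-image-test-host'
                }
            }
            stage('Publish Image') {
                steps {
                    // Version-tag the template and register it in VMware; the
                    // exact mechanism varies by environment.
                    echo "Publishing base image build ${env.BUILD_NUMBER}"
                }
            }
        }
    }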

Middleware Pipeline

The middleware image pipeline installs any necessary tools and libraries for specific application dependencies on the base image VM. An example would be installing Nginx on a server with some base configuration that is ready to receive a web application during provisioning. These pipelines should be run on a regular basis to ensure that teams always have access to the most up-to-date versions of their dependencies and services for their applications. The instances produced in this pipeline are provisioned in the environment delivery pipeline.

A diagram of the middleware pipeline.

The steps are as follows:

  • Commit Code - An engineer makes changes to the code for the middleware image and commits it to a GitHub repo, which triggers a Jenkins pipeline run. Alternatively, an engineer publishes a new version of a software package, such as Nginx, to Artifactory.
  • Pipeline Trigger and Checkout SCM - Jenkins pulls down the latest code from GitHub.
  • Pull Dependencies - The latest dependencies for the Middleware image are pulled down from Artifactory, along with any needed Ansible scripts.
  • Static Code Analysis - Static code analysis is run against the Ansible code via Sanity Tests in Ansible.
  • Build Image - The base OS image from the prior pipeline is spun up by the Packer script, and the Ansible configuration scripts are run to install Nginx. The VM is then repackaged as an Nginx VM image.
  • Unit Test Image - Unit tests are run against the image to validate that Nginx was installed correctly and to verify that the service comes online when started.
  • Security Testing - InSpec security tests are run again to ensure that the latest run of Ansible scripts has not compromised the required security settings and to verify that Nginx is configured in a secure fashion.
  • Publish Image - The new VM image is published, version-tagged back into VMware, and made available for use.
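
The Jenkinsfile for this pipeline looks much like the base image one, so the sketch below shows only the stages that differ. The Artifactory URL, package version, and Packer variable name are hypothetical assumptions:

    // Hypothetical middleware pipeline; the checkout, lint, security, and
    // publish stages mirror the base image pipeline and are omitted here.
    pipeline {
        agent any
        stages {
            stage('Pull Dependencies') {
                steps {
                    // Fetch the approved Nginx package from the central
                    // artifact store (URL and version are assumptions).
                    sh 'curl -fsSL -o nginx.rpm https://artifactory.example.com/artifactory/rpm-local/nginx-1.14.2.rpm'
                }
            }
            stage('Build Image') {
                steps {
                    // Start from the versioned base OS template instead of a
                    // raw ISO, run the Nginx role, and repackage the result.
                    sh 'packer build -var "source_template=base-os-1.4.2" middleware-nginx.json'
                }
            }
            stage('Unit Test Image') {
                steps {
                    // Verify Nginx installed correctly and the service comes
                    // online on the newly built image.
                    sh 'inspec exec profiles/nginx -t ssh://tester@middleware-test-host'
                }
            }
        }
    }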

Environment Delivery Pipeline

Now that you have idempotent base image and middleware image pipelines, you can create your larger environment provisioning pipeline. This is the pipeline most engineering teams will use on a day-to-day basis. In an environment delivery pipeline, you use a Terraform plan to provision all of the servers needed to create an environment, as well as to connect them to any external shared resources that developers and testers use for regular development and validation.

A diagram of the environment provisioning pipeline.

The steps are as follows:

  • Commit Code - An engineer commits new changes for the environment configuration or for the application configuration.
  • Pipeline Trigger and Checkout SCM - Jenkins pulls down the latest code from GitHub for provisioning the environment.
  • Unit Testing and Terraform Validation - Unit testing and validation of the Terraform scripts is performed. terraform validate is executed to ensure that the syntax of the scripts is correct, and terraform plan is executed to preview the changes a run would make and confirm they match expectations (see the sketch after this list).
  • Provision Environment - Terraform is used to provision all of the instances needed for the environment. The middleware images may be used from the Middleware Pipeline. (Multiple middleware images may be used depending on the environment makeup.)
  • Provision Platforms - Other platforms such as F5 load balancers and database servers are deployed as needed for a fully functioning environment.
  • Deploy Configurations - Any additional configurations the instances need are applied. These are settings that can only be applied at provisioning time, such as environment-specific system paths, application-specific configurations, and variables or connection-string information.
  • Availability Testing - Availability tests verify that each of the provisioned instances, services, and endpoints is running and reachable.
  • Security Testing - InSpec security tests are run again to ensure the provisioned environment meets the current security requirements set by the business. These tests also verify that the application security settings are set correctly and match the production security settings.
  • End - The environment is now provisioned and ready for engineering teams to deploy application code for development, testing, or release to production.
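
Below is a sketch of how the Terraform stages might be wired into the environment pipeline's Jenkinsfile. The directory layout, plan file name, InSpec profile, and host name are assumptions for illustration:

    // Hypothetical environment delivery pipeline; directory, profile, and
    // host names are illustrative assumptions.
    pipeline {
        agent any
        stages {
            stage('Pipeline Trigger and Checkout SCM') {
                steps { checkout scm }
            }
            stage('Unit Testing and Terraform Validation') {
                steps {
                    dir('environments/dev') {
                        sh 'terraform init -input=false'
                        // Catch syntax errors before touching infrastructure.
                        sh 'terraform validate'
                        // Preview the changes this run would make and save the
                        // plan so the apply stage executes what was reviewed.
                        sh 'terraform plan -input=false -out=tfplan'
                    }
                }
            }
            stage('Provision Environment') {
                steps {
                    dir('environments/dev') {
                        // Apply the saved plan to provision the instances,
                        // using the middleware images built upstream.
                        sh 'terraform apply -input=false tfplan'
                    }
                }
            }
            stage('Availability and Security Testing') {
                steps {
                    // Confirm endpoints respond and the environment still
                    // meets the enterprise security baseline.
                    sh 'inspec exec profiles/environment -t ssh://tester@web-01.dev.example.com'
                }
            }
        }
    }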

Outcomes of Building an Infrastructure Delivery Pipeline

Once the pipelines are running, engineers can provision environments for development and testing that match the production configuration. Your operations team can run and manage the pipelines and offer them to engineering teams as a consumable service, which expedites the whole process.

Engineers can request a server whenever they need one, consuming it as a service instead of opening tickets with the operations group, a process that can take anywhere from days to months and often yields a server that doesn’t match production. The number of mismatches between production and lower environments should also drop, because the same configuration is used everywhere.

As a result, enterprise organizations should see shorter delivery times for customer-facing features, fewer defects, and greater engineering autonomy. With all of the infrastructure configuration checked into source control, operations and engineering teams have full visibility into the infrastructure and environment configuration, and silos break down as the two groups work together closely to keep the infrastructure delivery pipelines updated, implemented, and improved. In addition, mean time to recovery for a piece of infrastructure should be reduced now that an automated process is in place.

