The efficiencies of automation are moving into all areas of IT. This core tenet of DevOps has tangible benefits within traditional operations, system administration, network administration, database management, etc. The ability to derive the changes to our systems and processes from a scripted source of record can remove a huge percentage of the human error from our daily work.
Placing the focus on automation helps us better understand how to test changes before they are made. Through this lens, it’s easy to see the advantage of making changes in a local sandbox environment before they reach production. While this has long been a must for traditional software development teams, extending the practice to other functional roles, for changes to all IT systems, is becoming more widely accepted as a best practice. We believe the fast feedback loops that exist for software development should exist for every change in your delivery pipeline. Modifications to your infrastructure, updates to databases, and changes to application configuration can be rock solid, made with the same (or greater) rigor as the products flowing through the pipeline itself.
Using local development practices and a local development environment for building applications is the status quo for software developers. As long as software has been a product, there has been a focus on enhancing local development practices. Countless tools are available to help developers write better code. Software developers have even been known to build tools to help other software developers build tools. Today, there is no faster way to get feedback than interacting directly with the code in an IDE (Integrated Development Environment) or seeing verbose console output after making changes in your favorite text editor. But does this experience translate to work done outside of the traditional software development lifecycle? Shouldn’t we be taking advantage of these tools to help us with other delivery challenges?
Step 1: Treating Everything as Code
What does “Everything as Code” mean? We believe it means using processes and tools to manage configuration changes in a way that helps teams understand exactly what it takes to configure and support the environments of all IT systems. There’s a good chance you already have a source control tool in your environment, and you should be able to use it to track all of these changes. SCM tools like Git, SVN, etc., offer obvious benefits for a traditional software development team: they make it easy to track changes to code or configuration over time. When we expand the use of SCM tools to contain the scripts for installing tools, setting up base images for operating systems, or changing environment configuration, we bring the behaviors we expect of traditional software developers to other operational roles. There is tremendous value in understanding, tracking, and eventually automating every modification to our systems, and the key to enabling these behaviors is keeping track of the changes to their settings and configuration.
How do we do this?
- You don’t have to tackle it all at once. Start small: keep a run list of manual steps in a text file
- You can add scripts of things that you automate along the way to your automation repository
- Next you may add links to those scripts in the run list, eventually phasing out much of the manual work
- Build in a review process for changes to the scripts (By using an SCM tool, you’re able to easily keep track of each change to the scripts or run list)
- Add additional contributors to your repositories so others can add small changes or improvements via Pull Requests
- Expand the script repository to reference artifacts of specific application versions or “gold” OS images
- Create additional SCM repositories to organize different types of automation
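The first few steps above can be sketched at the command line. Everything here — the repository name, the run list contents, and the script path — is hypothetical, and the commands assume Git is installed:

```shell
#!/bin/sh
# Sketch: put a run list under version control, with one step already automated.
set -eu
cd "$(mktemp -d)"                         # scratch directory for the demo

git init -q ops-automation && cd ops-automation
git config user.email "ops@example.com"   # local identity just for the demo commit
git config user.name  "Ops Team"

# The run list starts as plain text; two steps are still manual,
# and one links out to a script in the same repository.
cat > runlist.txt <<'EOF'
1. Create app user (manual)
2. Install web server -> scripts/install-web.sh
3. Open firewall port 443 (manual)
EOF

mkdir scripts
printf '#!/bin/sh\necho "installing web server..."\n' > scripts/install-web.sh
chmod +x scripts/install-web.sh

git add -A
git commit -q -m "Run list with one step automated"
```

From here, each pull request can replace another manual line in `runlist.txt` with a link to a reviewed script.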
Step 2: Using Local Development Practices and a Local Development Environment First for All Changes
Before you make a change to a production system, you should test the change thoroughly. Making sure every configuration change is tracked and tested should be something we try to enforce. To enable this, we need local development practices, and local testing, for any change made to our infrastructure systems or environment configuration.
Let’s take a look at a couple of the most popular tools that help drive this philosophy. The configuration for both Vagrant and Docker can be kept in source control and meet the requirements we set above to help deliver changes iteratively. You just have to decide what approach meets the needs of your product and environment.
Vagrant by HashiCorp
- Vagrant is a tool for building complete development environments
- Vagrant gives you a disposable environment and consistent workflow for developing and testing infrastructure management scripts
- Quickly test things like shell scripts, Chef cookbooks, Puppet modules, and more using local virtualization such as VirtualBox or VMware
- Use the same configuration to test your scripts on AWS using the same workflow
- Vagrant is also an open source project https://github.com/mitchellh/vagrant
- Get started at https://www.vagrantup.com/
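As a sketch of how this ties back to the run-list scripts above, here is a minimal Vagrantfile — the box name and provisioning script path are assumptions, not recommendations:

```ruby
# Vagrantfile — a minimal sketch for testing an infrastructure script locally.
Vagrant.configure("2") do |config|
  # Assumed base box; pick one that matches your production OS.
  config.vm.box = "ubuntu/trusty64"
  # Hypothetical provisioning script kept in the same repository.
  config.vm.provision "shell", path: "scripts/install-web.sh"
end
```

Running `vagrant up` builds and provisions the VM; `vagrant destroy` throws it away, so you can rerun your scripts from a clean slate as many times as you need.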
Docker
- Docker provides a different way of looking at environments
- Docker is a container platform used to isolate applications and reduce the need for “full installs” of host operating systems
- This is a lightweight way to run applications that relies only on the dependencies each application actually needs
- You can use Docker locally to test changes, in a CI deployment pipeline to test integrations, or in production to increase the number of applications you can run in an environment at one time
- Ideally, you would develop a Docker container locally and use the same one in any environment, minimizing any chance for errors along the way
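For comparison, a hypothetical Dockerfile sketch — the base image and content path are assumptions you would replace with your own application:

```dockerfile
# Sketch: build one image and run the same container in every environment.
# Assumed base image; substitute whatever your application needs.
FROM nginx:alpine
# Bake content into the image instead of configuring a host by hand.
COPY site/ /usr/share/nginx/html/
EXPOSE 80
```

The same `docker build` output can be run locally, exercised in a CI pipeline, and promoted to production, which is what keeps the environments from drifting apart.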
Step 3: Automating Updates in Other Environments
Once you’ve built a suite of ops playbooks and adopted local development practices to test your changes, you can take on the challenge of automating the delivery of your systems. With a few more tweaks to the pipeline, these changes can be fully automated in any environment.
- Add automated script validation by connecting your repository to a Continuous Integration tool like Jenkins – one of many tools that can build and test infrastructure
- If you’re using a configuration management tool such as Chef, you can use RuboCop and Test Kitchen to enforce best practices and automate the testing of your configuration change
- After your changes are being tested in CI and producing versioned artifacts, you can begin to confidently schedule updates to environments
- Move secret information such as credentials for your different environments to a secrets management tool such as HashiCorp’s Vault
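As a sketch of what such a CI stage might look like with Jenkins, here is a hypothetical declarative Jenkinsfile; it assumes shellcheck, RuboCop, and Test Kitchen are available on the build agent, and the paths are placeholders:

```groovy
// Hypothetical Jenkinsfile: lint and test infrastructure code on every push.
pipeline {
  agent any
  stages {
    stage('Lint') {
      // Static checks: shellcheck for shell scripts, RuboCop for Chef cookbooks.
      steps { sh 'shellcheck scripts/*.sh && rubocop cookbooks/' }
    }
    stage('Integration test') {
      // Test Kitchen converges the cookbook in a throwaway instance.
      steps { sh 'kitchen test' }
    }
  }
}
```

A green run of a pipeline like this is what gives you the confidence to schedule the same change against real environments.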
Where to Go from Here
Learning new tools or implementing new local development practices may not be something you can do overnight, but these things can be added incrementally. Find out where you are in your journey and pick up the next improvement to your pipeline. The more of your tools and environment you can prove out locally, the closer you will be to removing the need for manual changes everywhere else.
If you have any comments or questions, reach out to us @liatrio.