July 30, 2021

Just in Time Change Approvals Enabled by GitOps

This blog discusses how we’ve used GitOps for non-traditional software delivery patterns, bringing use cases that typically live outside the normal SDLC flow closer to the code.

Since the term was popularized in 2017, GitOps has pushed the use of version control systems beyond traditional application development teams and application source code. It adopts the “everything as code” mantra, expanding the use of Git to infrastructure and application deployment processes. With Git as the single source of truth, there is only one place to look to find everything about your application deployment: source code, deployment process, environment configuration, etc. While the term was originally associated primarily with Kubernetes-based deployments, it can easily be applied to non-container workloads as well.

GitOps provides a solid framework that helps improve continuous delivery practices in your pipeline. Because each environment is represented by a separate branch in version control, deployments are controlled by pull requests into each branch. When the pull request is merged, a deployment pipeline orchestrates the artifact promotion and application deployment. This provides the flexibility to automate as much as the team is comfortable with while allowing for manual approvals when necessary. Branch permissions can be configured to satisfy enterprise requirements around the separation of duties. For example, development teams can have full control over the content, configuration, and timing of deployments to the dev environment. However, before the deployment to production can occur, it needs to be approved by the QA and Release Management teams.
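As a minimal sketch of how such a branch policy might be scripted with the azure-devops CLI extension (the organization, project, and branch names here are illustrative assumptions):

```bash
# Requires the azure-devops extension: az extension add --name azure-devops
# Require two approvals on the production branch before a PR can complete
az repos policy approver-count create \
  --blocking true \
  --enabled true \
  --branch production \
  --minimum-approver-count 2 \
  --creator-vote-counts false \
  --allow-downvotes false \
  --reset-on-source-push true \
  --repository-id "$REPO_ID" \
  --org https://dev.azure.com/my-org \
  --project my-project
```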

While we recommend peer review of all changes, there is a fine line between ensuring quality code is delivered consistently and introducing unnecessary bottlenecks into your delivery process. Some of this is mitigated by quality automated testing baked into the integration and deployment pipelines. However, there are cases, such as dealing with sensitive data and compliance regulations, where additional controls are required to ensure access is granted only with the appropriate scope.

Just in Time Approvals

In a traditional enterprise environment, access requests are created in a separate ticketing system, such as ServiceNow, where they can be approved by different levels of management or information security teams. This often introduces hours or days of delay and can lead to discrepancies when changes are applied manually after approvals are granted.

Instead, we recommend storing these access requests alongside other application configuration. This provides visibility into the current state of the configuration, not simply what has been requested. It can also provide continuous validation that the configuration does not drift from what is specified in source control. The problem that arises is how to obtain the necessary approvals without introducing a bottleneck for every single change.

Because Terraform applies the access requests, the deployment pipelines can recognize when these changes are being made and inject additional approvals on the pull requests that deploy them. This allows standard peer reviews for routine changes: updating configuration, provisioning new infrastructure, or deploying new code. When changes impact access to sensitive data, the existing approval processes can be layered on top.

How it Works

We have created a project in Azure DevOps (ADO) with a repository containing some basic Terraform to provision new resources in Azure. We will apply this Terraform using a service connection in a pipeline. Leveraging security controls in both Azure and ADO, we can ensure that changes to the environment can only be made by the pipeline and can only be merged once the validation pipeline runs successfully.

First, we need to create the Terraform to provision our new environment resources.
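A minimal sketch of what that Terraform might look like; the resource group and storage account here are illustrative assumptions rather than our exact code:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

variable "environment" {
  description = "Short environment name (e.g. dev, prod)"
  type        = string
}

# Resource group for the environment
resource "azurerm_resource_group" "env" {
  name     = "rg-gitops-demo-${var.environment}"
  location = "westus2"
}

# Storage account whose data access we will manage later in this post
resource "azurerm_storage_account" "env" {
  name                     = "stgitopsdemo${var.environment}"
  resource_group_name      = azurerm_resource_group.env.name
  location                 = azurerm_resource_group.env.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```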

Then we need a basic YAML pipeline definition to validate and apply the Terraform. The conditions seen below ensure that we validate changes when pull requests are created and only apply them once the pull request has been merged to the appropriate environment branch.
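A trimmed-down sketch of such a pipeline (the branch names are assumptions, and wiring up the service connection so the azurerm provider can authenticate is omitted for brevity):

```yaml
trigger:
  branches:
    include:
      - dev
      - prod

pool:
  vmImage: ubuntu-latest

steps:
  # Runs for both PR validation builds and branch builds after merge
  - script: |
      terraform init -input=false
      terraform validate
      terraform plan -input=false -out=tfplan
    displayName: Validate and plan

  # PR validation builds have Build.Reason == 'PullRequest', so this step
  # only runs once the change has merged to the environment branch.
  - script: terraform apply -input=false tfplan
    displayName: Apply
    condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
```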

Once a successful pipeline execution is a prerequisite for completing a pull request, we can add additional conditions to the pipeline. For our use case, we need to flag when there is a new or modified access request. In Azure, access is granted based on a scope, a role, and a principal. The scope defines what resource or level the access is granted at (e.g., subscription, resource group, or a specific resource). The role defines what the user or group will be able to do (e.g., owner, contributor, reader). Lastly, the principal points to a particular user or group in Azure Active Directory that will be granted the given role at the given scope. Because we are using Terraform and the azurerm provider to establish role assignments, we can simply grep the plan output for the azurerm_role_assignment resource.
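As a sketch, that check can be a simple script step keying off the saved plan from the validation stage (the tfplan file name and pipeline variable are our assumptions):

```bash
# Render the saved plan as text and look for role assignment changes
terraform show -no-color tfplan > plan.txt

if grep -q "azurerm_role_assignment" plan.txt; then
  # Expose a pipeline variable that later steps can key off of
  echo "##vso[task.setvariable variable=roleAssignmentChanged]true"
fi
```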

When any changes are found, we can comment on the PR and add an additional required reviewer. Note: this is done with the ADO REST API. We would prefer to use the az CLI to add the reviewer, as it would make for cleaner code in the pipeline. At the time of writing, though, the az CLI only supports adding optional reviewers to pull requests, and an optional reviewer will not prevent the pull request from being completed without approval.
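A sketch of those two REST calls (placeholder IDs throughout; the pipeline’s System.AccessToken must be mapped into the step as SYSTEM_ACCESSTOKEN for the calls to authenticate):

```bash
ORG="https://dev.azure.com/my-org"   # illustrative values
PROJECT="my-project"
REPO_ID="<repository-guid>"
PR_ID="<pull-request-id>"
REVIEWER_ID="<security-group-identity-id>"

# Comment on the PR so the author knows why an extra reviewer appeared
curl -s -u ":$SYSTEM_ACCESSTOKEN" -H "Content-Type: application/json" -X POST \
  "$ORG/$PROJECT/_apis/git/repositories/$REPO_ID/pullRequests/$PR_ID/threads?api-version=6.0" \
  -d '{"comments":[{"content":"Role assignment change detected; security review required.","commentType":1}],"status":1}'

# Add the security group as a *required* reviewer; isRequired is the field
# the az CLI cannot currently set, hence the raw REST call
curl -s -u ":$SYSTEM_ACCESSTOKEN" -H "Content-Type: application/json" -X PUT \
  "$ORG/$PROJECT/_apis/git/repositories/$REPO_ID/pullRequests/$PR_ID/reviewers/$REVIEWER_ID?api-version=6.0" \
  -d '{"vote":0,"isRequired":true}'
```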

Now we can test adding a role assignment to grant `Storage Blob Data Contributor` to a group in Azure AD.
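The change itself is a single Terraform resource, along these lines (the group name is hypothetical, and the data source assumes v2 of the azuread provider):

```hcl
# Look up the Azure AD group that should receive access
data "azuread_group" "blob_writers" {
  display_name = "blob-data-writers" # hypothetical group name
}

# Grant the group Storage Blob Data Contributor on the storage account
resource "azurerm_role_assignment" "blob_writers" {
  scope                = azurerm_storage_account.env.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = data.azuread_group.blob_writers.object_id
}
```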


When the pull request is opened to merge this change into the environment branch, the pipeline execution will recognize the change and add additional reviewers.

(Screenshots: the pipeline execution checking for role assignments, then recognizing the change and adding the additional reviewers.)

There is one caveat to mention here. Required reviewers added ad hoc can still be made optional or removed altogether; only reviewers required by project policies cannot be modified. Because of this, when role assignment changes are made, the pipeline execution fails until it recognizes an approval from the appropriate security group (a sketch of that check follows the list below). The flow looks like this:

  1. Developer makes a change to role assignments and opens a PR
  2. Pipeline executes to validate the changes. It recognizes the role assignment update, adds the security reviewer, and fails execution
  3. Security team reviews and approves the changes on the PR
  4. Developer re-queues the pipeline or re-runs failed jobs in the previous execution
  5. Pipeline finishes successfully and the developer merges the PR
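
The approval check in step 2 can be sketched with the same REST API and placeholder variables as above, failing the step until the security group’s vote is an approval:

```bash
# Vote values: 10 = approved, 5 = approved with suggestions,
# 0 = no vote, -5 = waiting for author, -10 = rejected
VOTE=$(curl -s -u ":$SYSTEM_ACCESSTOKEN" \
  "$ORG/$PROJECT/_apis/git/repositories/$REPO_ID/pullRequests/$PR_ID/reviewers?api-version=6.0" \
  | jq -r --arg id "$REVIEWER_ID" '.value[] | select(.id == $id) | .vote')

if [ "$VOTE" != "10" ] && [ "$VOTE" != "5" ]; then
  echo "Security review has not approved this change yet."
  exit 1
fi
```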

While this does introduce dependencies on additional teams, slowing down the development flow, it meets enterprise requirements for access requests without pushing them to another system. Now role assignments are defined alongside the code, with Git as the source of truth. We can also rely on the pipeline to make the appropriate changes (and maintain them going forward) in an automated manner, without manual intervention. This helps prevent configuration drift over time.

An added bonus to all of this is the auditability of the changes. Since the pipeline maintains all role assignments, we can lock down the environment to prevent any external updates. All changes can be viewed clearly and concisely through the PR and Git history, alongside the approvals that let them happen. Because we leverage the GitOps approach to deployments, each environment gets its own set of approvals as well, ensuring that all access to the system is tracked correctly.

Final Thoughts

We always push to improve flow for teams, minimize blockers and dependencies on others, and automate as much as possible. This doesn’t mean we can ignore enterprise requirements for controlling access requests. Using an approach like this brings teams along on the journey towards automating the entire process while establishing guardrails for the types of changes that need additional oversight. Coupled with other functionality like service connection security policy, extending pipeline templates, and out-of-band security scanning on changes to the environment, teams can feel confident in the security and auditability of their infrastructure and application deployment automation.

Whether you are struggling with too many manual processes, modernizing your legacy applications, or transitioning to the cloud, reach out to see how we can help!
