Technology transformation can be a daunting task for any enterprise to take on. Embracing new technologies like scalable infrastructure is more complex than taking your existing applications and deploying them to “the cloud”. Legacy applications need to be refactored and re-platformed to realize the full potential of Cloud Native delivery practices. We created the Kubernetes in the Enterprise Ignite Lab to demonstrate how legacy applications can be migrated to Kubernetes while providing hands-on experience with true delivery transformation. Check out our other blog on Why We Created the Ignite Lab to learn more about what Ignite has to offer.
While we take a deep dive on converting a monolithic application to a microservice-based architecture, we don’t get a chance to explore how we facilitate the lab experience for multiple attendees. More specifically, how do we manage the set of tools that aid in the development and delivery of our applications? Enter the LEAD Toolchain. The LEAD (Liatrio Enterprise Accelerated Delivery) Toolchain is a single set of cluster-shared resources that supports log aggregation, security scanning, certificate/DNS provisioning, and other services that enable developers (in this case lab attendees) to deliver their applications to customers.
The idea that external tools & services are necessary to deliver applications is not a new concept; organizations typically have a wide array of tools at their disposal that developers use to meet certain requirements and aid in development. However, these tools are usually managed across multiple dedicated teams, often with several incarnations within each line of business. The lack of a clear, team-defined toolchain often results in time-intensive steps to onboard a new project. The following scenario is probably a familiar one:
- Open a service ticket for app A to get access to Jenkins
- Request that a new SonarQube project be created for app A
- Configure exporters for log aggregation on app A’s dev, qa, and prod servers
- Request new provisioning of load balancers to expose app A
- Wait for each team to complete your requests
- Configure Jenkins job(s) to build, test, and deploy app A to your infrastructure
As you can see, a lot of time is spent waiting for external teams to complete requests, in addition to the custom configuration users must perform to “plug in” to existing tool instances. Without a clear relationship between delivery tools and the customer products they support, we begin to see new problems emerge:
- Version drift
- Large blast radius in the event of tool failure
- Cost is obfuscated between LOBs (lines of business) that share centrally managed services
When sharing centralized services across many facets of an organization, teams are reluctant to perform updates for fear of interfering with a dependent team’s productivity. System failures are more devastating because many LOBs might be relying on the same set of servers to build, test, and deploy their applications. You also might be reliant on a few key team members who are familiar with legacy configurations because they configured the system in the first place. When failures do occur, organizations are dependent on these employees to “save the day”, creating a bottleneck.
Defining your toolchain configuration as code and introducing automation to provision resources enables organizations to scale these services across many different teams. Instead of submitting requests and waiting for access, teams can take ownership of their own toolchains by deploying them themselves using configuration management tools. As this scales out, toolchains can have a closer relationship with the products they serve, thereby eliminating many of the problems that arise from centralized services.
In the Kubernetes in the Enterprise Ignite Lab, we run all of our services within a Kubernetes cluster, which allows us to run our toolchain closer to the product itself. Within a single toolchain namespace, we can host the services that enable us to build and deploy our Springtrader demo application. Each component or service within the toolchain namespace is designed to support many products within a cluster. (We, as the DevOps community, consider a product to be an application or group of applications that represent a customer-facing service.) This allows us to clearly dictate the tools used as part of our cluster administration and provide these services to lab attendees without the need for manual configuration or service tickets. We implement this by leveraging:
- IaC (Infrastructure as Code) using Terraform to provision our cluster, install our toolchain applications, and configure them to support additional lab attendee products
- Kubernetes Controllers to automate provisioning of application resources (ex: DNS & TLS certificates)
- Fluent-bit deployed as a Kubernetes Daemonset to aggregate logs to Elasticsearch
- Kubernetes Mutating Admission Webhooks for injecting sidecars that add applications to our Istio service mesh
- A Product Operator to create Harbor repositories, SonarQube projects, Jenkins jobs, and other project resources
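To make the sidecar-injection piece concrete, the sketch below shows the core of a mutating admission webhook: it receives an AdmissionReview for a Pod and responds with a base64-encoded JSONPatch that appends a proxy container. This is a simplified illustration of the mechanism, not our actual implementation; in practice Istio’s own injector templates the sidecar from the mesh configuration, and the image tag here is just a placeholder.

```python
import base64
import json

def make_sidecar_patch(admission_review: dict) -> dict:
    """Build an AdmissionReview response that injects a proxy sidecar
    into the incoming Pod via a JSONPatch (simplified sketch)."""
    request = admission_review["request"]

    # The sidecar container to append; a real injector templates this
    # from the mesh configuration rather than hard-coding it.
    sidecar = {"name": "istio-proxy", "image": "istio/proxyv2:1.20.0"}

    patch = [{
        "op": "add",
        "path": "/spec/containers/-",  # append to the Pod's container list
        "value": sidecar,
    }]

    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            # The API server expects the patch to be base64-encoded
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }

review = {
    "request": {
        "uid": "1234",
        "object": {"spec": {"containers": [{"name": "app"}]}},
    }
}
print(make_sidecar_patch(review)["response"]["patchType"])
```

The key point is that the webhook never touches the cluster directly; it only describes a mutation, and the API server applies the patch before the Pod is persisted.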
An important concept to bring up is the idea of the Kubernetes Operator Pattern. As defined by CoreOS:
An Operator is a method of packaging, deploying and managing a Kubernetes application.
We developed our own Product Operator, which manages various types of products that we define as custom resources in the Kubernetes API. Using custom resources allows us to translate the definition of a product into an API object that can be used like any other Kubernetes resource. We define attributes on our product custom resource that point back to Terraform configuration stored in our Git repos. Now when we create a new instance of a product in Kubernetes, our Product Operator will trigger our predefined automation and provision the needed services.
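To illustrate the idea, here is a cluster-free sketch of the reconcile logic at the heart of an operator like this. The `Product` custom resource below is hypothetical (the `lead.liatrio.io/v1` group, the `terraformRepo` field, and the resource names are ours for illustration); a real operator would run the referenced Terraform configuration rather than just returning a plan.

```python
def reconcile_product(product: dict, existing: set) -> list:
    """Given a Product custom resource, return the per-product resources
    the operator still needs to provision (simplified sketch)."""
    name = product["metadata"]["name"]

    # The per-product resources the toolchain provisions; a real
    # operator would trigger the Terraform config referenced in
    # spec.terraformRepo to create each of these.
    desired = {
        f"harbor-repo/{name}",
        f"sonarqube-project/{name}",
        f"jenkins-job/{name}",
    }
    return sorted(desired - existing)

product = {
    "apiVersion": "lead.liatrio.io/v1",  # hypothetical group/version
    "kind": "Product",
    "metadata": {"name": "springtrader"},
    "spec": {"terraformRepo": "https://example.com/lead-toolchain.git"},
}

# First reconcile: nothing exists yet, so everything is planned.
print(reconcile_product(product, existing=set()))
```

Because reconciliation compares desired state against observed state on every pass, creating a new `Product` object is all a team needs to do; the operator converges the cluster toward the declared state.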
Deploying our LEAD Toolchain to a single namespace in a cluster keeps the tools “close” to the applications they serve which:
- empowers LOBs to take ownership of their toolchain
- provides insight into the cost of tools per LOB
- reduces blast radius from system failures
Caveat: We currently run the entire Kubernetes in the Enterprise Ignite Lab on a single EKS cluster. While this makes running the lab easier for us, some of our LEAD Toolchain implementation wouldn’t make sense in a “real-world” environment. For example, we run Harbor (our artifact store) within the same cluster we deploy our dev and production workloads to. A real organization will most likely want separate clusters for each environment (dev, qa, and prod), as well as to harden artifact stores by running them outside of all workload environments and performing regular backups. However, the main concepts we’ve discussed remain true: keep your toolchain close to the applications it serves and leverage modern technologies to automate the provisioning of application resources.
The LEAD Toolchain is meant to be modular, as the required services will change depending on the organization it is serving as well as the types of applications built by the teams using it. Some of the tools mentioned above may not be available within your organization, either due to policy requirements or simply a preference for different implementations. For example, when we first created the Kubernetes in the Enterprise Ignite Lab, we used Artifactory to store our application artifacts. As we began to add more tools to the LEAD Toolchain and host more labs, we decided to embrace Harbor. What we gained by adopting Harbor is a cloud native registry that stores, signs, and scans content, in addition to being a CNCF project. Swapping out Artifactory for Harbor was as simple as adding the configuration to deploy Harbor as part of our Toolchain and disabling our old Artifactory instance.
Rather than focusing on specific tools like Artifactory & Harbor, it’s important to practice the concepts we mentioned earlier: defining a clear relationship between your toolchain and the apps it serves, automating the provisioning of resources for developers, and isolating dependencies across LOBs.
As mentioned earlier, our goal with the Kubernetes in the Enterprise Ignite Lab was to take a real legacy application from a monolithic architecture to a set of distributed microservices running on a modern platform like Kubernetes. This requires us to solve real problems that enterprises struggle with today, such as how to provide a common set of tools to aid in delivery across many applications. While the specific tools and implementations may vary from org to org, the concepts discussed in this blog can enable your team to get to work without being reliant on external teams and processes. Throughout the entire lab, our LEAD Toolchain abstracts these tools away and allows attendees to focus on what matters: modernizing their application architecture, and quickly & safely delivering their products. Can you picture how this might work for you and your organization?
Ignite Lab Shoutout
If you are interested in learning more about the Kubernetes in the Enterprise Ignite Lab or would like to attend one yourself, head on over to our Ignite Lab landing page! Also feel free to reach out to us with any questions about bringing these ideas to your organization.