Today, enterprise quality is broken, and achieving it is an ongoing struggle. The reasons include a lack of engineering focus on quality, too great a focus on delivery dates, a preference for tools over frameworks, and analysis paralysis. Most testers are outsourced staff, and the QA FTEs who remain are primarily people managers rather than engineers. In truth, the enterprise release schedule tends to be the worst enemy of speed and quality. As the dreaded release date approaches, a scramble ensues, and all the rigor and processes put into place are circumvented. The focus on quality diminishes as teams settle for “good enough.”
Let’s take a deeper look at why enterprise quality is broken and touch on a few of the sources of the problem.
Conflicting Incentives
The first reason enterprise quality is broken is that engineers are rewarded for shipping code to production on time, under budget, and with zero defects. QA teams, in turn, play defense, as their success – along with their associated rewards – is measured by how many defects they identify and how many P1/P2 defects ultimately make it into production. This siloed structure and these conflicting reward systems inadvertently pit the two groups against each other. More to the point, defensive posturing often complicates the rapid delivery of quality software and erodes good working relationships between developers and QA.
Outsourcing and Manual Testing
A second key reason why enterprise quality is broken is that outsourced staff perform the vast majority of testing in many enterprises. I’ll state the obvious first: outsourcing provides cheaper resources with lower overhead than FTEs. The second, and arguably less obvious, point I’d like to make is that the work being performed in QA organizations is often low value, predominantly because most of it is manual and repetitive. This combination of factors has created a broken system that works against speed, quality, and innovation.
To add to this problem, enterprise QA managers are also often rewarded, promoted, and compensated according to the number of resources they manage. It’s no surprise, then, that these managers aren’t incentivized to automate manual test cases. From an outsourcing consulting firm’s perspective, automating manual test cases means reducing total headcount and, with it, profit margins.
Of course, some vendors are now proposing to implement complex frameworks and are promising the upskilling of manual testers. As it turns out, such frameworks are problematic (and costly) because most non-technical, manual testers aren’t equipped to perform automated QA engineering tasks.
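To make the contrast concrete, consider what automating a single repetitive manual check actually looks like. The function and values below are hypothetical, but the pattern is representative: a manual script step ("enter a price, apply a discount, verify the total") becomes a check that runs on every commit instead of consuming a tester's time on every release.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_discount_applied():
    # Manual script step: "Enter 100.00, apply a 15% discount,
    # verify the total reads 85.00." Automated, it runs in seconds.
    assert apply_discount(100.00, 15) == 85.00

def test_no_discount_leaves_price_unchanged():
    # Edge case a manual tester would rarely have time to re-verify.
    assert apply_discount(200.00, 0) == 200.00
```

Run with any standard test runner (e.g. pytest). The point is not the code but the economics: once written, this check costs nothing to repeat, which is precisely the outcome a headcount-based incentive structure discourages.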
Shared Services and Shift Left
A third reason why enterprise quality is broken is that many enterprises have created a shared service organization for QA in an effort to gain efficiencies and cut costs. Such a structure compounds the problem by further isolating testing in an organization external to the delivery team (the scrum team) and creating siloed handoffs that restrict the flow of work through the software development life cycle (SDLC).
Other organizations have implemented a “shift left” methodology, simply shifting test automation resources left into the scrum team and earlier in the development cycle. Those resources are given the task of automating test cases as developers write code. As I’ll discuss in more depth in a future post, the shift left methodology is a flawed solution to defect identification and correction.
Both setups are destined to fail. At worst, I’ve seen a mandatory 14 weeks of QA regression testing after development ends. Put yourself in a tester’s shoes. Testers are measured on quality, but they have little visibility into what’s being built and, perhaps more importantly, why. To further complicate matters, when testers identify defects, developers have to context switch back to code they developed weeks or even months earlier. This vicious cycle continues right up until the release date.
A Broken Framework
A final reason why enterprise quality is broken is that many teams get hung up on big challenges like test data management (TDM), performance testing, and data masking (a method of creating a structurally similar but inauthentic version of an organization’s data in order to protect that data and manage risks during software testing). Let’s be honest – TDM and data masking are complex and time-consuming. Layer in the size and complexity of an enterprise, and you have real challenges on your hands.
Typically, TDM, performance testing, and data masking struggle to scale and keep up with the pace of development. Add in analysis paralysis, and you have a serious problem. I personally have seen enterprise data masking take so long that delivery teams are working with scrubbed production data from two years earlier. This delay creates a dangerous gap where delivery teams are out of touch with the reality of production.
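As a rough illustration of the masking concept defined above, the sketch below deterministically replaces sensitive fields with inauthentic tokens of the same length, so the masked data stays structurally similar (field layout, lengths, and joins survive) without exposing real values. The record shape, field names, and salting scheme are assumptions for the example, not a production TDM design.

```python
import hashlib

def mask_value(value: str, salt: str = "demo-salt") -> str:
    """Deterministically replace a sensitive string with an inauthentic
    token of the same length, so joins and format checks still work."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest[:len(value)]

def mask_record(record: dict, sensitive_fields: set) -> dict:
    """Return a structurally identical record with sensitive fields masked."""
    return {
        key: mask_value(val) if key in sensitive_fields else val
        for key, val in record.items()
    }

# Hypothetical customer record; only name and email are treated as sensitive.
customer = {"id": 1042, "name": "Jane Doe", "email": "jane@example.com", "plan": "gold"}
masked = mask_record(customer, sensitive_fields={"name", "email"})
```

Because the masking is deterministic, the same input always maps to the same token, which keeps relationships between tables intact. Even a toy version like this must be re-run as schemas and data evolve, which hints at why enterprise-scale masking so often falls years behind production.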
Enterprise Quality Is Broken, but It Can Be Fixed
Enterprise quality is broken, but this problem can be addressed and corrected. In future blog posts, I’ll delve deeper into the problems discussed above and identify strategic ways to fix them.