Are you ready to adopt Service Level Metrics in the True Spirit?

Service Level Agreements (SLAs) and their associated metrics can be applied to projects at various levels and across a diverse set of disciplines. In this write-up, I focus on what it takes to implement such SLAs in the true spirit for QA efforts.


Metrics have long been used to track projects. In a recent discussion with a group of QA experts, I was asked, “What is different about metrics now; aren’t we tracking the same set of metrics as in the past?” It is a good question, and it forms the basis of my discussion below.


Yes, QA / Test Management has traditionally tracked metrics such as defect ratios, defect validity, test productivity, pass percentage, code coverage, and so on. A seasoned test manager used these metrics, coupled with experience, to gauge a product’s quality and readiness to ship. However, the metrics were largely boilerplate and inward-looking: the same standard set was carried forward release after release, and it focused more on executional aspects than on the product’s business requirements.


In recent years this has been changing: metrics are now customized for each release, and are dynamic and actionable, based on:


– Requirements from the business teams (largely driven by end user requirements)

– Competition in the market

– Performance aspects and not just functional aspects


The test team now collaborates closely with the rest of the product development team to incorporate findings from these metrics and further enhance product quality. For example:

– If the page response time of a competing product in the market is better, causal analysis is done to fix the issue at hand

– Code coverage results are analyzed further, and the test team works with the developers to remove dead code from the code base


So, the entire team is beginning to see the value of these metrics rather than viewing them as overhead. Whether you are the QA team of a product company or a QA services vendor, take a closer look at the metrics you use today and whether they meet the criteria outlined above. Once you have designed them to meet your product’s needs, the next question is: “Are you able to define service level agreements (SLAs) based on these metrics, and to accept rewards or penalties for meeting or missing them?” While most companies have implemented metrics, this is a clear differentiator of whether or not the metrics have been implemented in the true sense.

At most places, such monetary ties to SLAs have not been implemented, more out of fear of monetary loss from penalties than for lack of interest in the gains from rewards. Lately, however, the practice has been picking up steam, because it is the ultimate proof of the true value one can derive from such metrics. A team’s confidence in its quality efforts, and in the quality of the product it signs off on, is well represented when it is ready to sign up for such monetized SLAs.

There are some external dependencies, though, that one must keep in mind, and protocols must be built around them to set up a successful monetized SLA model. These include:


  1. Timely inputs on customer requirements including functional and performance requirements, from the Business/Marketing/Product Management Teams
  2. Timely and tight communication protocols with the overall product team especially development teams
  3. Adequate representation from the product development team in review and feedback on core test artifacts
  4. Representation from the test team in making important product and project decisions over the course of the life cycle
  5. A pre-determined budget for testing that empowers the test team to take on the required levels of testing, with the required people, technology, and tools expertise


Once you have the core system in place, taking into account the points mentioned above, you can gradually move to a monetized SLA model. Start with a few core areas you are comfortable with; once you see how the model works in those specific areas, you can expand the scope to more ambiguous areas, bringing objectivity into those gray areas as well. In a subsequent post I will discuss specific test effort and test quality metrics that drive improved product and project quality.
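To make the monetized SLA idea concrete, here is a minimal sketch of how a reward/penalty clause might be evaluated at release time. The metric names, targets, and adjustment percentages are all invented for illustration; real contracts define their own:

```python
# Hypothetical monetized-SLA evaluation sketch.
# Thresholds and adjustment percentages are illustrative, not a standard.

def evaluate_sla(metric_name, actual, target, higher_is_better=True,
                 reward_pct=2.0, penalty_pct=3.0):
    """Return (status, fee adjustment as % of contract value) for one SLA."""
    met = actual >= target if higher_is_better else actual <= target
    adjustment = reward_pct if met else -penalty_pct
    return ("met" if met else "missed"), adjustment

# Example release-level SLAs (invented numbers):
slas = [
    ("pass_rate",        0.96, 0.95, True),   # at least 95% of tests pass
    ("defect_leakage",   0.03, 0.05, False),  # at most 5% of defects leak out
    ("page_response_ms", 420,  400,  False),  # pages respond within 400 ms
]

total_adjustment = 0.0
for name, actual, target, higher in slas:
    status, adj = evaluate_sla(name, actual, target, higher)
    total_adjustment += adj
    print(f"{name}: {status} ({adj:+.1f}%)")
print(f"Net fee adjustment: {total_adjustment:+.1f}%")
```

Starting with a few well-understood metrics like these keeps the model objective; the more ambiguous areas can be folded in once both parties trust the measurement protocol.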

About the Author

QA InfoTech

Established in 2003 with fewer than five testing experts, QA InfoTech has grown by leaps and bounds, with three QA Centers of Excellence globally: two located in Noida, the hub of IT activity in India, and the other at our affiliate, QA InfoTech Inc., Michigan, USA. In 2010 and 2011, QA InfoTech was ranked among the top 100 places to work for in India.
