Regression testing is a major piece of software testing. A large share of test effort and budget is spent here, and for good reason: however careful a developer may be when implementing a new feature or fixing an approved bug, there is always a chance that something else breaks. Testers therefore cover not just the core of the bug fix or new feature but also the surrounding areas that may be impacted, and maintain a sanity suite that runs regularly to ensure ongoing product stability. These tests are also great candidates for automation, given their mundane nature and the value of their repeatability. Even today, however, most regression candidates are functional cases, as most test teams see the greatest value in functional regression testing.
Now let's consider the non-functional areas for a moment. Accessibility, usability, performance, security, localization: these non-functional test buckets carry the same regression requirement when a bug has been fixed in their space. However, the blast radius is often larger than we think. A functional issue can certainly affect performance; a performance issue can certainly affect accessibility. Because these quality attributes are so tightly coupled, touching one very likely impacts another attribute in the same area of the code base. A regression effort should therefore account not just for functionality in the immediately affected area but also for non-functional attributes and core sanity tests. That said, one cannot be overly conservative either, letting the regression suite grow so large that it becomes hard to maintain. Time on hand is usually limited, resources are limited, and every automation run may have to be justified. A smart regression suite is one that best showcases a tester's optimization and prioritization capabilities, as well as a true understanding of the product's health.
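To make the prioritization idea concrete, here is a minimal sketch of one way a suite could be trimmed under a time budget: tag each test with its area, quality attribute, and a risk estimate, then greedily pick the highest-risk tests touching the changed areas. The names, fields, and selection rule are illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class RegressionTest:
    name: str
    area: str          # product area, e.g. "checkout"
    attribute: str     # "functional", "performance", "accessibility", ...
    minutes: int       # estimated run time
    risk: int          # 1 (low) to 5 (high), the tester's judgment call

def select_suite(tests, changed_areas, budget_minutes):
    """Pick the highest-risk tests touching the changed areas, across all
    quality attributes, without exceeding the run-time budget."""
    candidates = [t for t in tests if t.area in changed_areas]
    candidates.sort(key=lambda t: t.risk, reverse=True)
    suite, used = [], 0
    for t in candidates:
        if used + t.minutes <= budget_minutes:
            suite.append(t)
            used += t.minutes
    return suite

tests = [
    RegressionTest("checkout_flow", "checkout", "functional", 10, 5),
    RegressionTest("checkout_perf", "checkout", "performance", 20, 4),
    RegressionTest("checkout_a11y", "checkout", "accessibility", 15, 3),
    RegressionTest("search_basic", "search", "functional", 5, 2),
]
suite = select_suite(tests, changed_areas={"checkout"}, budget_minutes=30)
print([t.name for t in suite])  # highest-risk checkout tests that fit
```

Note how non-functional tests compete for the same budget as functional ones, rather than being an afterthought; the risk score, not the attribute, decides what runs.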
Areas such as web accessibility testing pose an additional challenge: VPAT, Section 508, WCAG, and similar standards are subject to ongoing updates. The tester must account not only for product changes in functional and non-functional areas, but also for changes in the industry standards and templates being used. While these non-functional areas may not look like an ongoing effort, the combination of tracking product changes and tracking the guidelines can keep the non-functional tester busy throughout the year. A few things need to come together in an optimal regression strategy here. This is not an exhaustive set, but it is an important one for an area like accessibility that must take in both extrinsic and intrinsic factors:
- Visibility into changes in the industry: how frequently are standards and guidelines changing, and what is the known roadmap for the next couple of years
- Visibility into major upcoming product changes and how they may impact the non-functional area in question
- Visibility not just into the bugs you file in the non-functional area, but also periodic random sampling of the high-priority, high-severity functional defects that are taken in
- A periodic review of the sanity tests you have built for your area
- Enhancements and new automation additions wherever possible, especially integrating smoke tests for non-functional areas into existing functional test scripts; see https://qa.qainfotech.com/an-automated-itinerary-to-accessibility-testing/ (if you are interested in a demo of this framework, we can arrange one)
- Ongoing brainstorming with the team, to help others understand what you do in the non-functional space. Such cross-group knowledge sharing goes a long way toward building team bonding and generating ideas for broader scope coverage. No test area should work in isolation
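As a rough illustration of the smoke-test integration idea above, here is a minimal accessibility check that could ride along inside an existing functional test, built only on the Python standard library. The two checks shown (images lacking an alt attribute, a missing page language) are a tiny, illustrative subset of WCAG concerns; this is a sketch, not the framework from the linked article.

```python
from html.parser import HTMLParser

class A11ySmokeCheck(HTMLParser):
    """Collects basic WCAG-style violations while parsing a page:
    <img> tags with no alt attribute, and <html> with no lang attribute.
    A smoke check only, not a substitute for a full accessibility audit."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # alt="" is legitimate for decorative images, so we only flag
        # images where the attribute is absent entirely.
        if tag == "img" and "alt" not in attrs:
            self.violations.append("img missing alt attribute")
        if tag == "html" and not attrs.get("lang"):
            self.violations.append("html missing lang attribute")

def a11y_smoke(page_html):
    """Run the smoke check over fetched page HTML; returns violations."""
    checker = A11ySmokeCheck()
    checker.feed(page_html)
    return checker.violations

# A functional test that already fetches a page could add one extra
# assertion, e.g.: assert not a11y_smoke(page_source)
page = '<html><body><img src="logo.png"></body></html>'
print(a11y_smoke(page))
```

The appeal of this pattern is that the functional script already has the rendered page in hand, so the accessibility check adds coverage at near-zero extra run time.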
With a strategy like the one above, you can strike a reasonable balance: maintaining ongoing regression coverage for your non-functional test area without spending all of your test effort on regression scenarios alone.