As testers, we often think about which new products we need to be prepared to test, especially as new technologies enter the market. Ramping up on such new areas, choosing tools to test them, and accommodating all of this within a short agile test cycle are things the testing group is very focused on. However, we often miss the meta point: as testers we not only need to learn to test these technologies, but also to leverage them to improve our own efficiency wherever possible. For example, over the last couple of years, we at QA InfoTech have been talking about how to enhance a tester's efficiency using Augmented Reality; we have a full webinar on this topic, too. Extending the same principle to data analytics, a tester can benefit from this discipline if the right practices are used to optimize the test effort.
Today, the tester's scope of work is vast. A tester deals with a lot of data, both structured and unstructured. Test scenarios come from everywhere, and user feedback, an important input for quality analysis, is available in abundance. Past practices, historical data, product acceptance, current user trends, competitive positioning, and scripted and unscripted test approaches together make the tester's job both demanding and challenging. How does one sift through all of this data to derive actionable inputs for the test process, and how does one prioritize within these huge data sets to decide what to focus on with the limited time and money at hand? This is where data analytics comes in as a boon, provided the tester can process the information on hand into intelligent action items. For example, one can mine the vast end-user data using core keywords to understand usage patterns and feedback, and then focus testing on the core scenarios. Similarly, competitive analysis can be taken up proactively, and A/B tests can be run with predictive analytics tools to gauge end-user acceptance of a given app or product. The options here are really endless.
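As a concrete illustration of the keyword-mining idea, here is a minimal sketch in Python. The feedback comments and the keyword list are entirely made up for illustration; in practice the comments would come from app-store reviews, support tickets, or survey exports, and the keywords from the team's own risk areas.

```python
import re
from collections import Counter

# Hypothetical sample of end-user feedback (illustrative only).
feedback = [
    "Login fails every time after the latest update",
    "Checkout is slow and the payment page times out",
    "Great app, but login via Google keeps failing",
    "Search results load slowly on mobile",
]

# Core keywords the test team cares about (an assumed, illustrative list).
keywords = {"login", "checkout", "payment", "search", "slow", "fails", "failing"}

def keyword_counts(comments, keywords):
    """Count how often each core keyword appears across all comments."""
    counts = Counter()
    for comment in comments:
        for word in re.findall(r"[a-z]+", comment.lower()):
            if word in keywords:
                counts[word] += 1
    return counts

# Rank keywords by frequency to decide which scenarios to test first.
for word, n in keyword_counts(feedback, keywords).most_common():
    print(word, n)
```

Even a toy ranking like this turns a pile of unstructured feedback into an ordered list of scenarios worth extra test attention; real pipelines would add stemming, stop-word handling, or sentiment scoring on top.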
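The A/B-testing point can likewise be sketched with a standard two-proportion z-test, which checks whether two variants' conversion rates differ significantly. The conversion numbers below are hypothetical, chosen purely to show the mechanics.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate differ from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical trial: 120/1000 conversions for variant A, 150/1000 for B.
z, p = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the team's chosen threshold (commonly 0.05) would suggest the difference in acceptance between the two variants is unlikely to be chance, which is exactly the kind of evidence a tester can feed back into release decisions.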
One may question: who has the time to actually implement such tools and practices in the core testing effort, even if we understand their value? I am not saying we should chase every new technology to see whether it would make the tester more efficient. Some, like data analytics, stand out as obvious candidates. In such cases, it makes complete sense for a test practice or R&D team to run a pilot, prove the value, and then implement it across the board. After all, what is the value of a new technology that focuses only on new products while ignoring the value it can add to existing practices?