Non-functional test areas have been gaining much-deserved prominence in recent times, often at the same level as the functional test scope. Within the non-functional bucket, performance testing is one of the core areas teams have been focusing on of late, given changing end-user expectations: users now want applications that are magically responsive and always available. A solid performance tester is a subject matter expert whom teams look to nurture as an in-house resident, because finding such a person is not easy. This is an individual who understands the product domain, the overall concept of quality, the product workflows, automation scripting, end users and their consumption patterns, savvy communication with business teams, the need for close collaboration with functional testers as well as developers, system architecture and code internals, competitor baselines... the list goes on.
Hiring such an individual laterally, one who satisfies all these requirements, can be very difficult; growing such a performance architect and tester organically within the team usually makes the most sense. One of the core reasons so much is expected of this tester is that the results of a performance test effort are often cryptic: they are not intuitive to read and comprehend except to the trained eye of the performance tester. The performance tester therefore plays a significant role in ensuring the outcomes of a performance test effort are meaningful, relevant and actionable.
The larger question here is: where does the performance tester’s role end? Is it at the stage of executing the designed performance test scenarios, collecting logs, analysing results and reporting bottlenecks and defects, if any? Or does it extend further, into understanding and reporting where in the code base the issue actually lies and what the proposed fix should be? There is no definitive answer; it really depends on the group, the product and the organization, and either stand can be taken. The important distinction is this: in a functional test effort, taking a deeper position, such as looking at the code base and reporting where the issue is and what the fix needs to be, may be seen as a role trespass into the developer’s zone, whereas in the performance space it is seen as a very welcome trend.
From the tester’s standpoint as well, while such an effort might be detrimental to his or her unbiased thinking about what to test in a functional effort, from a performance test standpoint it is perfectly acceptable. The main reason is that core performance test scenarios are not very complex; testers don’t need a lot of creative, out-of-the-box thinking and can focus primarily on the core product or application workflows. All of the tester’s know-how and skill is leveraged in scripting and executing the tests, debugging issues, defining baselines, understanding gaps, looking at the code to determine where the issues actually reside, driving fixes to closure, and re-verifying that performance has improved as a result of the entire process. A tester should thus see himself or herself as omnipresent in the performance testing space, ready to dive into any zone required, without any bounds, to ensure a successful performance test effort. Not all of this may be possible at the get-go, and some of it will require sophisticated tools; nevertheless, the core is to come in with an unbiased mindset that knows no bounds.
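To make the "defining baselines and re-verifying improvement" step concrete, here is a minimal sketch in Python of a latency-baseline check. The function names, the simulated workload, and the 50 ms budget are all hypothetical illustrations, not a prescribed tool or API; real efforts would use a dedicated load-testing tool against the actual application workflow.

```python
import time

def measure_latencies(operation, iterations=20):
    """Time repeated calls to `operation`; return latencies in milliseconds."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

def p95(latencies):
    """95th-percentile latency, a common metric for a performance baseline."""
    ordered = sorted(latencies)
    index = int(0.95 * (len(ordered) - 1))
    return ordered[index]

def check_against_baseline(latencies, baseline_ms):
    """Return True if the p95 latency stays within the agreed baseline."""
    return p95(latencies) <= baseline_ms

# Hypothetical workload: a sleep standing in for one product workflow request.
results = measure_latencies(lambda: time.sleep(0.005))
print(f"p95: {p95(results):.1f} ms, "
      f"within 50 ms baseline: {check_against_baseline(results, 50.0)}")
```

After a fix lands, re-running the same check against the same baseline is how the tester re-verifies that the change actually produced a performance improvement rather than assuming it did.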