A job well begun is half done. This belief holds across all walks of life, and software quality is no exception. Among the varied areas of software quality, and especially the non-functional ones, a robust performance testing strategy is critical: arriving at one that is practical yet top notch takes considerable thought, foresight, discussion, and brainstorming with varied stakeholders, including business teams and end customers, along with evaluating competitors and market parameters.
A performance testing strategy, typically owned by a performance architect or a lead/manager, is defined upfront and revisited constantly along the way. Such revisits not only surface gaps but also yield actionable steps forward. This is one area where, once the upfront effort is rock solid, implementation becomes straightforward: it reduces to tool understanding, scripting, execution, and handling deviations, because everything else is clearly defined in the strategy. Performance strategists are consequently much sought after; given the effort that goes into the upfront definition, once it is done they can move easily to another project or effort, with only minimal monitoring of the earlier strategy's implementation.

With a functional test strategy, by contrast, the industry often questions the value it adds, especially when many functional elements change over the course of the test cycle. Such a strategy can quickly become obsolete, and maintaining it through the quality cycle is not only a large overhead but also of questionable value. Performance, however, is only a piece of the pie, albeit an important one: as a horizontal concern it is not as elaborate as the overall functional effort, its size stays manageable, and performance parameters and strategy, once defined, are not subject to much change. All of this makes performance strategy an activity well worth taking up upfront.

The strategy can vary in detail and complexity. Beyond the standard benchmark values, scripting scenarios, thresholds, scaling patterns, load/stress/endurance scope, and tool choices, some strategies even cover test data creation and the actions to take when performance outcomes are undesirable.
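To make the idea of benchmark values and thresholds concrete, here is a minimal sketch of how thresholds from a strategy document might be encoded so that each load-test run can be checked automatically. All names and threshold values here are hypothetical, purely for illustration; a real strategy would define its own metrics and limits.

```python
from statistics import quantiles

# Illustrative thresholds such as a strategy document might specify
# (the metric names and values are assumptions, not a standard).
THRESHOLDS = {
    "p95_response_ms": 800,   # 95th-percentile response time ceiling
    "error_rate_max": 0.01,   # at most 1% failed requests
}

def evaluate_run(response_times_ms, error_count, total_requests):
    """Compare one load-test run's results against the strategy's thresholds."""
    # quantiles(..., n=20) returns 19 cut points; index 18 is the 95th percentile.
    p95 = quantiles(response_times_ms, n=20)[18]
    error_rate = error_count / total_requests
    return {
        "p95_response_ms": p95,
        "error_rate": error_rate,
        "passed": (p95 <= THRESHOLDS["p95_response_ms"]
                   and error_rate <= THRESHOLDS["error_rate_max"]),
    }

# Example run: mostly fast responses, few errors.
times = [120, 150, 180, 200, 220, 250, 300, 350, 400, 450,
         500, 520, 560, 600, 640, 700, 720, 750, 780, 790]
result = evaluate_run(times, error_count=2, total_requests=1000)
print(result["passed"])
```

Capturing thresholds this way is one simple means of keeping the strategy's deviation-handling actionable: a run that fails the check can trigger whatever remediation steps the strategy defines.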
With such a detailed strategy in place, software performance testing becomes a straightforward task, making the overall exercise a gateway to smooth application experiences. This has become all the more important across applications today, when a performance experience can make or break the user's connection with the product.