API testing is a widely used practice in quality engineering. Lately, it is being used for functional testing not just by quality teams but also by development teams, because after unit testing it is the layer that helps everyone catch issues faster, closer to the root cause, and at a granular level. While API testing is still largely functional in nature, it is increasingly being applied to non-functional areas too, especially security and performance testing. On the security front, for example, OWASP, which had thus far focused mainly on its Top Ten web application vulnerabilities, has expanded its coverage with a dedicated API Security Top 10.
Coming to performance, teams have so far mainly focused on end-to-end software performance testing, keeping business scenarios and benchmarks in mind. Typically, such executions were taken up after rounds of functional testing, to ensure the product was functionally stable enough that performance scenarios could be examined without unwanted blockers. While this is still partially true, a lot has changed on the performance testing front: testing is now taken up in-Sprint, and performance scenarios cover not just end-to-end functional performance testing but, to a large extent, unit performance testing and API performance testing as well. For instance, our team recently discussed with one of our clients the API performance testing they were doing, to help them understand what was being done right, where the gaps were, and what could be done better. Here are a few core points from that discussion, which would benefit anyone taking up API performance testing:
- Define API execution counts based on the number of endpoints in each API rather than on the API itself, as several APIs have more than one endpoint. Teams that count only the overall number of APIs will not arrive at a realistic execution pattern
- When defining your load distribution pattern for a benchmarking run, evaluate how long peak usage actually lasts in production and mirror that duration in the test
- Use assertions in performance testing scripts as well. Even though these may be functional assertions already validated during functional test runs, they help with debugging and troubleshooting performance failures
- To ease the load on load generator resources, optimize the listeners in your scripts (for example, avoid verbose per-sample result listeners during the actual run)
- As always, the test environment should replicate the production environment and database as closely as possible
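To make the endpoint-count point concrete, here is a minimal sketch in Python. The API inventory, the endpoint paths, and the target request count are all hypothetical illustration data, not from any real system; the point is simply that distributing load per endpoint gives a different, more realistic plan than distributing it per API.

```python
# Hypothetical API inventory: three APIs, but six endpoints in total.
api_inventory = {
    "orders":   ["/orders", "/orders/{id}", "/orders/{id}/items"],
    "users":    ["/users", "/users/{id}"],
    "payments": ["/payments"],
}

total_apis = len(api_inventory)                                     # 3 - misleadingly small
total_endpoints = sum(len(eps) for eps in api_inventory.values())   # 6 - the realistic unit

# Distribute an assumed target of 600 requests evenly per endpoint,
# rather than 200 per API regardless of how many endpoints each has.
per_endpoint = 600 // total_endpoints
plan = {ep: per_endpoint for eps in api_inventory.values() for ep in eps}
```

Counting per API would have sent the single-endpoint `payments` API the same traffic as the three-endpoint `orders` API, skewing the execution pattern.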
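The peak-duration point can also be sketched: shape the benchmarking run so the peak-hold phase lasts as long as peak usage is observed to last in production. The function name and all the numbers here are illustrative assumptions, not measured values.

```python
def build_load_profile(baseline_users: int, peak_users: int,
                       peak_minutes: int, ramp_minutes: int) -> list[tuple[str, int, int]]:
    """Return (phase, target_users, minutes) tuples for a simple ramp/hold/ramp-down plan."""
    return [
        ("ramp-up", peak_users, ramp_minutes),
        # Hold the peak for as long as production peaks actually last,
        # not for an arbitrary duration.
        ("peak-hold", peak_users, peak_minutes),
        ("ramp-down", baseline_users, ramp_minutes),
    ]

# Example: production shows ~45-minute peaks of ~400 concurrent users.
profile = build_load_profile(baseline_users=50, peak_users=400,
                             peak_minutes=45, ramp_minutes=10)
```

A tool-specific version of this (thread groups, stages, and so on) follows the same shape; the key input is the observed peak duration.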
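On assertions in performance scripts, the sketch below shows the kind of functional checks worth keeping in a load script: status code, a latency SLA, and a body field. The `Sample` structure, the 500 ms SLA, and the `order_id` field are hypothetical stand-ins for whatever your tool's sampler actually returns.

```python
from dataclasses import dataclass, field


@dataclass
class Sample:
    """One sampled response from a load run (hypothetical shape)."""
    status: int
    latency_ms: float
    body: dict = field(default_factory=dict)


def assert_sample(sample: Sample, sla_ms: float = 500.0) -> list[str]:
    """Return a list of assertion failures for one sample; empty means pass."""
    failures = []
    if sample.status != 200:
        failures.append(f"unexpected status {sample.status}")
    if sample.latency_ms > sla_ms:
        failures.append(f"latency {sample.latency_ms:.0f}ms breached SLA {sla_ms:.0f}ms")
    if "order_id" not in sample.body:  # hypothetical functional check
        failures.append("response body missing order_id")
    return failures
```

When a run degrades, failure messages like these point straight at whether the problem is functional (wrong body), an SLA breach, or an outright error, which is exactly the troubleshooting value the bullet above describes.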
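One way to read the listener-optimization point is memory pressure: a listener that retains every sample grows without bound during a long run, while one that keeps only running aggregates stays constant-size. The class below is a tool-agnostic sketch of that idea, not any real tool's listener API.

```python
class AggregatingListener:
    """Keeps running aggregates instead of retaining every sample,
    reducing memory pressure on the load generator."""

    def __init__(self) -> None:
        self.count = 0
        self.errors = 0
        self.total_ms = 0.0
        self.max_ms = 0.0

    def on_sample(self, latency_ms: float, ok: bool) -> None:
        # O(1) memory per sample: update aggregates, discard the sample itself.
        self.count += 1
        self.errors += 0 if ok else 1
        self.total_ms += latency_ms
        self.max_ms = max(self.max_ms, latency_ms)

    def summary(self) -> dict:
        avg = self.total_ms / self.count if self.count else 0.0
        return {"count": self.count, "errors": self.errors,
                "avg_ms": round(avg, 1), "max_ms": self.max_ms}
```

In practice, most load tools already offer a lean summary mode; the principle is the same: during the measured run, prefer aggregating listeners over per-sample ones.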
The good thing about API performance testing is that, given the very nature of an API, it is far more actionable, modular, and understandable than even end-to-end functional performance tests, which makes it easier to take on sooner, within the Sprint itself. It certainly does not replace the ultimate end-to-end performance tests that demonstrate performance against business scenarios and requirements, but it is a great early start for catching and fixing issues, making the later end-to-end runs, conducted horizontally across Sprints, a more reliable effort. And with just a little additional understanding and effort, API performance testing can be taken up by the API functional testing team itself. They will of course need some guidance from the larger performance testing team to ensure the effort aligns with the overall software performance testing goals, but they can bring in this value early on, freeing the performance test SMEs to focus on the trickier and bulkier tests. This mutual support system makes the effort a good software performance testing service model, even when it is handled entirely in-house by a group of SMEs and the core quality team.