Moving from a commercial to an open source performance testing tool

I recently moderated a panel discussion on commercial vs. open source testing tools at the Next Generation World Testing Conference. The discussion ended on the note that the two camps will increasingly collaborate, despite their distinct differences. Afterwards, several people at the conference asked me for pointers on moving from commercial to open source tools, especially in the area of performance testing.

As an organization, we are heavily invested in the open source technology space and have helped clients move from commercial to open source tools. We have detailed case studies on this, but here is a gist of what such a move entails. Based on a migration we did for a specific customer, the cost savings can be as high as 87% in the case of a performance benchmarking test and 91% in the case of a performance execution at peak load.

As in any other effort, the first step in this migration is to determine the goal. Why do you want to migrate? Are you looking for cost savings, community support, specific features? This understanding helps all stakeholders stay on the same page from a requirements standpoint. More often than not, the goal is cost savings. That said, if you are heavily invested in a commercial tool, the legacy involved in the migration may be substantial. Not all of the test scripts can be reused as is: while the workflow behind a script and the test data can be carried over, the code itself often needs to be rewritten from scratch. Developing and stabilizing the rewritten scripts can be a significant effort, and teams need to understand this upfront, as it involves both time and cost.
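To make the "rewrite from scratch" point concrete, here is a minimal sketch of what a rewritten load-generation script might look like in plain Python. This is purely illustrative and not taken from the case study: it assumes a local HTTP endpoint standing in for the system under test, and in a real JMeter migration the same workflow would instead be expressed as a test plan with thread groups and samplers. The point is that the workflow (concurrent virtual users, sequential requests, latency collection) carries over, while the code expressing it is new.

```python
import http.server
import statistics
import threading
import time
import urllib.request

# Hypothetical system under test: a throwaway local HTTP server on an
# ephemeral port, used here so the sketch is self-contained and runnable.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

latencies = []           # response times in seconds, shared across virtual users
lock = threading.Lock()  # protects the shared list

def virtual_user(requests_per_user: int) -> None:
    """One simulated user issuing sequential GET requests and timing each."""
    for _ in range(requests_per_user):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)

# Ramp up 5 concurrent virtual users, 4 requests each (20 samples total).
users = [threading.Thread(target=virtual_user, args=(4,)) for _ in range(5)]
for u in users:
    u.start()
for u in users:
    u.join()
server.shutdown()

print(f"samples={len(latencies)} "
      f"median={statistics.median(latencies) * 1000:.1f}ms "
      f"max={max(latencies) * 1000:.1f}ms")
```

Even in this toy form, the script shows why migration effort is dominated by code rather than by test design: the user counts, pacing, and data feeds transfer directly from the commercial tool's configuration, but every line of driving logic has to be re-expressed in the target tool's idiom and then stabilized.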

Deciding which open source performance testing tool to use is also an important activity. The open source world is growing significantly by the day, both in how its tools compare with commercial offerings and in the number of players, so the choice of tool is not always a no-brainer. In the case study mentioned above, for example, we compared four tools before finalizing on JMeter. The tool to migrate to is a big decision and, more importantly, an expensive one to revert. Teams therefore need to invest enough time upfront, and also look at future trends around each tool, to determine which one to go with.

Once these questions are analysed, building skills in a specific tool is usually not that time consuming. In particular, testers who understand the performance testing domain and are familiar with at least one tool can ramp up on another with minimal effort, unless the underlying programming language itself is completely different. At our end, we have a research and development team that invests in such new tools, utilities and frameworks; this team makes it easier to ramp up project teams as needed and hand-holds them through such migrations, making the process fast and seamless for our end clients. If you are interested in talking to us in greater detail about how we did this migration, or in understanding our software performance testing services, please feel free to reach out to us at

About the Author

Rajini Padmanaban

As Vice President, Testing Services and Engagements, Rajini Padmanaban leads the engagement management for some of QA InfoTech's largest and most strategic accounts. She has over seventeen years of professional experience, primarily in software quality assurance. Rajini advocates software quality through evangelistic activities, including blogging on test trends, technologies and best practices. She is also a regular speaker at conferences run by SQE, QAI STC, ATA, UNICOM and EuroStar, and has orchestrated several webinars. Her writings are featured in TechWell, Sticky Minds and Better Software Magazine, and she has co-authored a book on crowdsourced testing. She can be reached at