Compatibility is nothing new to us as software testers, but it has evolved significantly over time. Traditionally it was part of the test strategy for verifying an application across various platforms. High-use, important platforms would take the larger share of the tests, while less important platforms would only be smoke tested. This also opened up plenty of opportunity to optimize, weighing the number of test scenarios and constraints such as time and budget on hand against the value of running the tests across varied platforms.
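To make that prioritization concrete, here is a minimal sketch of how such a plan might be expressed in code. The platform names, usage shares, and the 20% threshold are all illustrative assumptions, not figures from any real project:

```python
# Hypothetical usage shares for each platform; the numbers and the
# threshold below are illustrative assumptions only.
platform_usage = {"Windows": 0.55, "macOS": 0.25, "Linux": 0.12, "Other": 0.08}

def plan_coverage(usage, full_suite_threshold=0.20):
    """High-use platforms get the full regression suite; the rest get a smoke pass."""
    return {
        platform: "full regression" if share >= full_suite_threshold else "smoke test"
        for platform, share in usage.items()
    }

plan = plan_coverage(platform_usage)
```

In practice the threshold would be driven by the time and budget constraints mentioned above, rather than a fixed cut-off.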
As mobile devices came into the picture, compatibility testing took on a slightly different flavour: combining tests across varied devices and looking for ways to optimize around devices with the same screen size, resolution, or OS. Gradually, compatibility testing was no longer just about the application's functionality and UI, but also about other quality attributes, especially performance and security.
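The device-matrix optimization described above can be sketched simply: group devices that share the attributes you consider equivalent, then test one representative per group. The device names and fields here are hypothetical, and which attributes make two devices "equivalent" is a judgment call for each project:

```python
from collections import defaultdict

# Hypothetical device inventory; names, OS versions, and resolutions
# are made up purely for illustration.
devices = [
    {"name": "Phone A", "os": "Android 13", "resolution": "1080x2400"},
    {"name": "Phone B", "os": "Android 13", "resolution": "1080x2400"},
    {"name": "Phone C", "os": "Android 12", "resolution": "720x1600"},
    {"name": "Tablet D", "os": "iPadOS 17", "resolution": "1640x2360"},
]

def pick_representatives(devices):
    """Group devices sharing the same OS and resolution; keep one per group."""
    groups = defaultdict(list)
    for device in devices:
        groups[(device["os"], device["resolution"])].append(device)
    # One representative device from each equivalence group.
    return [group[0] for group in groups.values()]

representatives = pick_representatives(devices)
```

Here Phone A and Phone B collapse into one group, so four devices reduce to three test targets; richer strategies (pairwise combination of OS, resolution, and hardware, for instance) shrink larger matrices further.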
In today’s scenario, compatibility has grown by leaps and bounds and is starting to take a very large share of the quality effort. I was reading this morning about how Amazon’s Alexa offerings are becoming increasingly comprehensive, covering a wide range of scenarios in the world of IoT (Internet of Things), which is when I thought I should write this post about compatibility. Imagine the number of scenarios to cover, within and outside of Amazon’s offerings, to ensure the devices and software across the entire ecosystem work well together. Software and hardware vendors are becoming more transparent and open about verifying cross-compatibility scenarios thoroughly to deliver a compelling experience for end users. Compatibility will make all the difference in whether newer technologies such as IoT are accepted in the marketplace, and for this, testers need to think of compatibility in close conjunction with all the other quality attributes when defining a robust test strategy.