Testing across varied platforms, operating systems and browsers is nothing new in software testing. Compatibility testing, especially OS and browser compatibility to ensure the application under test works and renders well across a supported matrix, has traditionally been an important part of the test scope. Inputs from the business are often taken as well to finalize the supported matrix, including new areas of upcoming support and the backward compatibility to be provided for. In the pre-mobile days this was limited to a handful of operating systems and browsers; today, it is a critical piece of mobile application testing.
While operating system and browser options remain few and manageable, the mind-boggling growth in devices has made mobile testing huge in scope.
Newer devices are flooding the market by the day, with a range of screen sizes and processing speeds, spanning handhelds, tablets and more, let alone other portable devices such as wearables. Mobile application testing therefore cannot be a static effort. While a regular functional tester focuses mainly on core scenarios to test, a mobile functional tester takes on the additional task of refreshing the supported device list every few months, to ensure the test effort reflects the latest trending devices and usage patterns. Often the goal is not chasing the latest devices, but mining usage patterns to ensure a representative mobile app testing effort.
Subscription-based services such as Sauce Labs and BrowserStack make testing across a wide range of devices possible today without having to invest in expensive hardware. Similarly, mobile simulators and emulators are a good bet in specific cases. At QA InfoTech we also maintain an internal pool of devices, forming our own mobile lab, in addition to a crowd model in which, for specific scenarios that require less common devices, our employees can rent out their devices for a period of time. Typically, iOS device refreshes are minimal, about once a year, but Android is a large market to keep tabs on. In most of our projects we refresh our Android device list two to three times a year and our Apple devices once a year. This pattern has worked well so far, but it is not just about the frequency of the refresh: the factors taken into account in deciding the next set of devices to use, including usage patterns at the given time, matter just as much.
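The idea of refreshing the device matrix from usage patterns can be sketched in a few lines. This is a hypothetical illustration, not our actual tooling: the device names, usage-share figures and the `pick_device_matrix` helper are all made up for the example, and the greedy selection simply adds the most-used devices until a target share of observed usage is covered.

```python
# Hypothetical sketch: choosing the next device refresh from usage-share
# data (e.g. an analytics export). All names and figures are illustrative.
usage_share = {
    "Pixel 8": 14.0,
    "Galaxy S23": 12.5,
    "Galaxy A14": 9.0,
    "Redmi Note 12": 7.5,
    "Moto G Power": 4.0,
    "Galaxy S21": 3.0,
}

def pick_device_matrix(share, coverage_target=40.0):
    """Greedily add the most-used devices until the matrix covers
    the target percentage of observed usage."""
    matrix, covered = [], 0.0
    for device, pct in sorted(share.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= coverage_target:
            break
        matrix.append(device)
        covered += pct
    return matrix, covered

matrix, covered = pick_device_matrix(usage_share)
print(matrix, covered)
```

Rerunning such a selection every refresh cycle is one way to keep the matrix tied to how users actually behave rather than to whatever launched most recently.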
Mobile testing requires a lot of user-centric, extempore thinking: how users would use their devices, their mindset, what other actions may happen while the app under test is in use, how overall performance and security fare, and how hardware and software integrate, among other things. Here, the choice of the matrix to test, which devices get full testing and which ones are used for a sanity test, together decides how well the app will fare. Granted, there is no right or wrong answer, as the proof of the pudding is in the app's market acceptance, for which understanding the pulse of the user through usage patterns and evaluating the app's performance against those patterns becomes key. As more devices enter the market, this will only continue to intensify, making the device matrix a central piece of any successful mobile app testing effort.
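The full-test versus sanity-test split described above can likewise be expressed as a simple rule. Again a hypothetical sketch: the devices, share figures and the 10% threshold in `tier_devices` are invented for illustration; real projects would tune the cut-off against their own usage data.

```python
# Hypothetical sketch: splitting a device matrix into "full test" and
# "sanity test" tiers by usage share. Data and threshold are illustrative.
devices = [
    ("iPhone 15", 18.0),
    ("Pixel 8", 14.0),
    ("Galaxy S23", 12.5),
    ("Galaxy A14", 9.0),
    ("Redmi Note 12", 7.5),
    ("Moto G Power", 4.0),
]

def tier_devices(devices, full_threshold=10.0):
    """Devices at or above the usage threshold get the full test pass;
    the rest get a lighter sanity pass."""
    full = [name for name, pct in devices if pct >= full_threshold]
    sanity = [name for name, pct in devices if pct < full_threshold]
    return full, sanity

full, sanity = tier_devices(devices)
print("Full:", full)
print("Sanity:", sanity)
```

Keeping the rule explicit like this also makes it easy to revisit the tiers whenever usage patterns shift.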