Challenges of Multi Device Testing

With the growing range of end-user computing options, compatibility testing has become an important yet challenging area in the last few years. Personal computers, laptops, Macs, tablets and smartphones are all prevalent, and mobile computing options in particular are flooding the marketplace. Amidst this changing hardware and software landscape, there is an increasing need for software applications to run well on all of these options. This is not an easy scenario to handle from a software testing angle, because of the number of combinations in which the application needs to be verified within the limited resources and time on hand. I will be talking about the testing challenges that exist today, taking the specific example of testing Android applications, and also discussing some mitigation strategies. The Android example is used only to aid the explanation; the larger point is to consider how these discussion points apply to your specific compatibility test scenarios and how to customize some of these solutions to serve your requirements. Also keep in mind the scale of testing, as Android alone brings many variations to the table.

Problem statement:
The scale of Android market penetration makes it difficult and impractical, if not impossible, to execute all tests across all combinations. An Android application could run on any combination of device, platform version and vendor firmware. If you were to test all of these, you would be testing on over 100 different phones, 7 Android platform versions (1.5, 1.6, 2.1, 2.2, 2.3, 3.0 and 3.1) and approximately 2 vendor firmwares. Thus, about 1,400 combinations need to be tested if you want to be foolproof. However, often only a subset of these combinations and tests yields maximum returns in terms of bugs found. How do you determine the best set of combinations to test on, maximizing bugs found and minimizing risk as much as possible?
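To get a feel for that scale, here is a minimal sketch that simply enumerates the cross product of the factors; the counts below are the illustrative figures from this post, not a real device inventory:

```python
from itertools import product

# Illustrative counts from the post, not a real device inventory.
phones = [f"phone_{i}" for i in range(100)]                      # ~100 phone models
os_versions = ["1.5", "1.6", "2.1", "2.2", "2.3", "3.0", "3.1"]  # 7 platform versions
firmwares = ["vendor_fw_a", "vendor_fw_b"]                       # ~2 vendor firmwares

full_matrix = list(product(phones, os_versions, firmwares))
print(len(full_matrix))  # 100 * 7 * 2 = 1400 combinations to be "foolproof"
```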


Solution – identify the appropriate device factors:
The crucial element in restricting the list of devices is to recognize the factors that might affect the application, and then make sure all relevant combinations of those factors are covered. Dealing with factors instead of devices lets you scale the list down to a manageable subset that still covers a large portion of relevant scenarios.

Listed below are some such factors to keep in mind when determining the optimization matrix; a sketch of turning these factors into a reduced test matrix follows the list:
Screen and its size: Smartphone and tablet applications are typically developed not for a GUI (Graphical User Interface) but for an NUI (Natural User Interface), which involves multi-touch gestures such as scrolling, tapping, shaking, pinching and other finger-touch inputs. The main concern is the screen size of the devices on which the application will run; a key challenge is designing a view layout that fits each screen. Screen sizes range from QVGA (240×320), HVGA (320×480), WVGA (480×800) and FWVGA (480×854) up to tablet resolutions such as 1024×600 and the standard tablet 1280×800. So one of the challenges is to run the same application well on both 240×320 and 1280×800.

Android OS versions: The Android platform is rapidly evolving, and the difference between version 1.5 and version 3.1 is huge. There are many features and behavior changes across versions that can affect the application under test.

Mobile processor: Mobile devices are very sensitive to processing power. Devices could run a single-core CPU at 600 MHz or a dual-core CPU at 1200 MHz.
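One common way to act on this factor-based thinking is pairwise (all-pairs) coverage: pick a set of combinations such that every pair of factor values appears together at least once. Below is a minimal greedy sketch; the factor values are illustrative, and in practice teams usually reach for a dedicated tool such as Microsoft PICT rather than hand-rolled code:

```python
from itertools import combinations, product

# Illustrative factor values drawn from the list above.
factors = {
    "screen": ["240x320", "320x480", "480x800", "480x854", "1024x600", "1280x800"],
    "os": ["1.5", "1.6", "2.1", "2.2", "2.3", "3.0", "3.1"],
    "cpu": ["single_600MHz", "dual_1200MHz"],
}
names = list(factors)

def pairs_of(combo):
    """All (factor, value) pairs covered by one full combination."""
    items = list(zip(names, combo))
    return set(combinations(items, 2))

# Every value pair that must appear in at least one chosen combination.
uncovered = set()
for combo in product(*factors.values()):
    uncovered |= pairs_of(combo)

chosen = []
while uncovered:
    # Greedily take the combination covering the most still-uncovered pairs.
    best = max(product(*factors.values()),
               key=lambda c: len(pairs_of(c) & uncovered))
    chosen.append(best)
    uncovered -= pairs_of(best)

total = len(list(product(*factors.values())))
# Greedy typically lands near the 42-combination lower bound here, versus 84
# in the full matrix; the savings grow quickly as more factors are added.
print(f"{len(chosen)} pairwise-covering combinations vs {total} total")
```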

Strategies to mitigate the risk factors:

Edge: Select factors at the edges of the scale, such as the minimum and maximum screen sizes, the lowest and highest OS versions, and the minimum and maximum CPU power.
Combining these influencing factors, we can consider 2 devices (a small selection sketch follows):
Lower scale: HTC Hero: Android version 1.5, HVGA (320×480) screen size and 528 MHz Qualcomm CPU
Higher scale: Galaxy Tab 10.1v: Android version 3.1, 1280×800 screen size and 1000 MHz dual-core CPU
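As a minimal sketch of this edge selection, assuming each factor can be ordered numerically; the values are illustrative, and a real selection is constrained to devices that actually exist, which is why the lower-scale pick above is the HTC Hero rather than the absolute smallest screen:

```python
# Boundary values per factor; the numbers are illustrative only.
screens = [(240, 320), (320, 480), (480, 800), (480, 854), (1024, 600), (1280, 800)]
os_versions = [1.5, 1.6, 2.1, 2.2, 2.3, 3.0, 3.1]
cpus_mhz = [528, 600, 1000, 1200]

def edges(values, key=None):
    """Return the smallest and largest value of one factor."""
    ordered = sorted(values, key=key)
    return ordered[0], ordered[-1]

low_screen, high_screen = edges(screens, key=lambda s: s[0] * s[1])
low_os, high_os = edges(os_versions)
low_cpu, high_cpu = edges(cpus_mhz)

# Lows combine into the lower-scale device, highs into the higher-scale one.
print("lower scale:", low_screen, low_os, f"{low_cpu} MHz")
print("higher scale:", high_screen, high_os, f"{high_cpu} MHz")
```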

Commonality: Keep tabs on market device penetration and usage patterns to select the most popular devices. These alone will give you the most bang for the buck and will make your prioritization task easier. Sites such as http://marketshare.hitslink.com/ help determine market penetration rates.
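A minimal sketch of commonality-based prioritization under a fixed device budget; the market-share figures below are invented placeholders, not real data from the site above:

```python
# Hypothetical market-share figures; real numbers would come from a
# source such as http://marketshare.hitslink.com/
market_share = {
    "Device A (2.2, 480x800)": 0.23,
    "Device B (2.1, 320x480)": 0.18,
    "Device C (2.3, 480x854)": 0.11,
    "Device D (1.6, 240x320)": 0.04,
}

budget = 2  # how many devices the team can afford to cover
prioritized = sorted(market_share, key=market_share.get, reverse=True)[:budget]
coverage = sum(market_share[d] for d in prioritized)
print(prioritized, f"~{coverage:.0%} of tracked users covered")
```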
Simultaneous execution of test cases: Once a matrix has been prioritized, have testers run the same test across combinations simultaneously as much as possible. This not only knocks the test case off in one shot, but also lets the tester evaluate and compare application behavior across variables at the same time; this often yields better feedback than running the tests at separate times, and it also makes the test setup process more efficient.
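A minimal sketch of fanning the same test out to several attached devices at once with a thread pool; run_test_on_device is a hypothetical helper, not a real API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_test_on_device(test_name, device_id):
    """Hypothetical helper: drive one test on one attached device,
    e.g. by shelling out to `adb -s <device_id> shell am instrument ...`."""
    return device_id, "pass"  # placeholder result

devices = ["emulator-5554", "HT9XXXXX1", "GT-P7100"]  # illustrative device IDs

with ThreadPoolExecutor(max_workers=len(devices)) as pool:
    results = list(pool.map(lambda d: run_test_on_device("login_flow", d),
                            devices))

# Side-by-side results make cross-device differences (timing, layout,
# crashes) easier to spot than runs done at separate times.
print(results)
```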

Testing across combinations: Encourage testers to test across combinations even when they are using the application in an ad-hoc fashion. Also bring in testers from other disciplines who may not be directly working on your project, especially when you are testing a consumer application. The more coverage you get through such formal and informal methods, the better your chances of eliciting realistic end-user feedback before release.

In the years to come, computing options are only going to increase. You will need to devise a strategy that works best for your set of constraints and revisit it with every release to ensure it continues to help you find the most relevant and important defects in the shortest possible time. The list above is not exhaustive, but an attempt to convey the gravity of the problem at hand and some core solutions that can come in handy immediately. Some of the core thoughts in this blog are from Chandni Sharma, one of our QA engineers, and I have expanded on them.

About the Author

QA InfoTech
Established in 2003 with fewer than five testing experts, QA InfoTech has grown leaps and bounds, with three QA Centers of Excellence globally: two located in Noida, the hub of IT activity in India, and the third at our affiliate QA InfoTech Inc., Michigan, USA. In 2010 and 2011, QA InfoTech was ranked among the top 100 places to work for in India.