Web accessibility, which ensures usage by everyone including people with disabilities, is no longer a nice-to-have. It is increasingly demanded by end users and mandated by universal standards and governmental regulations. Accessibility testing is a growing area gaining momentum within the non-functional testing bucket. Given the number of web standards to adopt and the testing tools available, is there a single comprehensive strategy that ensures an application's readiness for its end users from an accessibility standpoint?
Software testers are increasingly moving upstream to collaborate with developers across releases. While this holds good for a lot of test areas, one area that is still playing catch-up is accessibility. Many organizations that have so far not thought about accessibility are now looking at it, either voluntarily or under mandates such as Section 508 and the DDA. Better late than never, so this is certainly a good move forward, but it is also an area that takes us back to the traditional, bottom-up style of software testing. Since in most cases we are working on legacy software that is already in production, what strategies would work well here?
Have the accessibility team start with the most critical applications and evaluate them against the mandates across a range of disabilities. Today there are ample built-in browser toolbars, assistive technologies, and checklists such as the VPAT available, in addition to manual reviews. There are also tools that crawl the entire application and flag accessibility-specific issues. The tester can start with such sources and then work their way up to picking the most relevant ones, taking into account the cost and time constraints on hand.
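To make the idea of automated scanning concrete, here is a minimal sketch in plain Python of the kind of rule such tools apply. It checks only two common issues, images without alt text and form inputs without an associated label; real scanners cover far more rules, so treat this as an illustration only.

```python
from html.parser import HTMLParser

class AccessibilityScanner(HTMLParser):
    """Toy scanner: flags images missing alt text and
    form inputs without an associated <label>."""
    def __init__(self):
        super().__init__()
        self.issues = []
        self.labelled_ids = set()
        self.input_ids = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.issues.append("img missing alt text")
        if tag == "label" and "for" in attrs:
            self.labelled_ids.add(attrs["for"])
        if tag == "input" and attrs.get("type") != "hidden":
            self.input_ids.append(attrs.get("id"))

    def report(self):
        # Inputs whose id never appears in a label's "for" attribute
        for input_id in self.input_ids:
            if input_id not in self.labelled_ids:
                self.issues.append(f"input '{input_id}' has no label")
        return self.issues

page = """
<form>
  <img src="logo.png">
  <label for="user">Username</label>
  <input id="user" type="text">
  <input id="pwd" type="password">
</form>
"""
scanner = AccessibilityScanner()
scanner.feed(page)
print(scanner.report())
```

Running this flags the unlabelled password field and the image with no alt text, the same class of finding a browser toolbar would surface.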
In addition, a crowd team of real end users with disabilities can be brought in for a bug bash to report their feedback. Some findings may not be defects per se but suggestions for a more accessible application. Such suggestions may not be feasible in the legacy application given the complexity of the fix, but they can be great inputs for future applications.
Feedback from live issues reported in production is also a great place to start. Whatever the source of information, looking at accessibility in legacy products today is certainly a bottom-up approach. With the right focus expended on this area, the scale will hopefully soon tilt and we can move from a bottom-up to a top-down approach for accessibility too, carrying all of these learnings into the products built in the future.
Accessibility is an important area in non-functional engineering today. Software development efforts have acknowledged this and started accommodating accessibility in design, implementation, testing and marketing. However, what about what has already been shipped? There is such a large backlog that even enforcing standards and mandates will not make embracing accessibility quick or easy. Section 508 continues to be a great go-to resource, be it for designers, developers or testers. Checklists such as the VPAT, built off of Section 508, have greatly helped testers in their accessibility testing efforts. Despite all of this awareness and effort, the gap between where we ideally need to be and where we are today is huge. Why is that?
Organizations are engineering products within very short time and cost budgets, leaving no room to look at backlogs. Accessibility issues that now need to be incorporated retroactively are not small fixes; they often need design-level changes. Certain organizations, such as Amazon, have engineered a completely new solution solely focused on accessibility, asking screen reader users to navigate to a separate site. While this is a thoughtful way to include them, the question is whether it is a scalable solution. It becomes a duplicate effort to maintain two versions of the site, let alone the time and cost of maintaining them and the discrepancies that may arise between the two versions. It is also not clear whether this accommodates just screen reader users or other disabled users too.
To ensure this gap between the ideal and the real world does not widen further, it is very important for organizations to take a conscious call today: a call to define their priorities, to understand that the situation is not going to change overnight, to apply what they choose to implement both to newer engineering efforts and to critical past efforts that are still end-user facing, and to put together a focus group to implement this plan and bridge the gap. This focus group has an important charter: to dynamically help the engineering teams come up to speed on accessibility and incorporate the latest developments in the field, until the teams reach the maturity required to build accessible applications independently. To accomplish this, mandates such as Section 508 are a great start, regardless of the size, scale, domain and technology of your organization.
Functional testing is a core piece of any software engineering effort. With the advent of mobile and social computing, non-functional testing has also gained prominence in recent times, but at its core, functional testing remains one of the top goals of a testing effort in ensuring a quality release to end users. If one looks closely at what constitutes functional testing, it is not difficult to see that it covers both the positive (happy path) and the negative (error scenario) testing routes. The real question, given the time constraints we often work within, is whether both of these arms are adequately tested. While positive scenarios are often straightforward, negative scenarios require a lot of creativity and simulation. Rather than merely executing all possible workflows, the tester here simulates varied conditions that trigger errors, so as to verify a) whether the right errors were thrown and b) whether the user is given the right opportunities to recover gracefully from an error. The other angle to consider, which is slightly more subjective, is the user's emotional state, which often runs high when failure conditions are encountered. For the tester, this area thus involves both objective negative scenario testing and subjective testing of the user's emotional experience.
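The objective half of this, verifying that the right error is raised and that its message gives the user a path to recover, can be expressed as simple assertions. The `withdraw` function and its error messages below are hypothetical, invented purely to illustrate the pattern:

```python
class InsufficientFundsError(Exception):
    """Raised when a withdrawal exceeds the balance."""

def withdraw(balance, amount):
    """Hypothetical operation used to illustrate negative scenario testing."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise InsufficientFundsError(
            f"cannot withdraw {amount}: only {balance} available")
    return balance - amount

# Positive (happy path) scenario
assert withdraw(100, 40) == 60

# Negative scenario: assert that the *right* error is raised,
# with a message specific enough for the user to recover.
try:
    withdraw(100, 500)
except InsufficientFundsError as e:
    assert "only 100 available" in str(e)
else:
    raise AssertionError("expected InsufficientFundsError")

# Another negative scenario: invalid input is rejected distinctly.
try:
    withdraw(100, -5)
except ValueError as e:
    assert "positive" in str(e)
```

Note that the negative tests check not just that *an* exception occurs, but that the correct type and an actionable message are produced; both points the prose above calls out.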
One may often limit negative testing to being purely scenario driven. While this is not entirely wrong, negative data generation is also very important during test planning, case generation and execution. A simple scenario can flip between positive and negative based purely on the data used, so testers should keep in mind that negative testing does not always mean a separate set of scenarios. For instance, say the user has to enter input in a login field: instead of my name *Rajini*, I decide to enter *R12$ini*. This is negative testing that is data driven as opposed to scenario driven. Both types of negative tests are important to ensure comprehensive functional testing coverage. Besides core functional testing, other areas where such negative data is useful include localization, database, security and accessibility testing.
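The login example above can be turned into a small data-driven table, where one scenario (entering a name) yields both positive and negative tests depending solely on the data row. The validation rule here is an assumption made for illustration, not a rule from any real system:

```python
import re

def is_valid_name(value):
    """Assumed validation rule: letters only, 2-30 characters."""
    return bool(re.fullmatch(r"[A-Za-z]{2,30}", value))

# One scenario, many data rows: the data, not the workflow,
# decides whether each test is positive or negative.
test_data = [
    ("Rajini",  True),   # positive: plain name
    ("R12$ini", False),  # negative: digits and symbols
    ("",        False),  # negative: empty input
    ("A" * 31,  False),  # negative: exceeds length limit
]

for value, expected in test_data:
    assert is_valid_name(value) == expected, f"failed for {value!r}"
```

Extending coverage then becomes a matter of adding rows (accented characters for localization, quote characters for database and security probes) rather than writing new scenarios.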
Negative path testing is not new to us as testers. However, with all the new testing techniques and tools coming into the picture, it is important to refresh these basic concepts at the grassroots level so that we arrive at the right blend of testing practices for comprehensive test coverage, of which negative path testing is a very important component.