I don’t take any credit for this content, other than editing it down to a digestible list of sorts. I like the work of karlgroves.com so much, I thought I’d summarise it and post it here. More detail on this topic, including my own schematic of an agile-accessibility process, can be found in Part 1 here.
Do Automated Testing First
- The first round of testing should be automated testing only.
- The resulting report should contain guidance on how to fix the issues found (an audit), with each issue assigned to the relevant role.
- The development team should fix those problems in a sprint.
- Do a regression audit that includes both automated and manual testing at project milestones or in the final sprint.
- Never pay a human to find errors that can be found through automated testing.
- Manual testing will close the gaps automated testing leaves.
- With this iterative approach, you’ll reach compliance faster and more cheaply.
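To make the idea of a definitive automated check concrete, here is a minimal sketch of the kind of pass/fail test a tool runs in that first round. It uses only Python's standard library; the class and function names are illustrative, not from any particular product.

```python
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Flags <img> tags with no alt attribute -- a definitively
    pass/fail accessibility check (WCAG 1.1.1, Non-text Content)."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            # Record the offending tag exactly as it appeared in the markup.
            self.violations.append(self.get_starttag_text())

def find_missing_alt(html: str) -> list:
    checker = ImgAltChecker()
    checker.feed(html)
    return checker.violations

page = '<p>Hello</p><img src="logo.png"><img src="chart.png" alt="Q3 sales">'
print(find_missing_alt(page))  # ['<img src="logo.png">']
```

Note that an empty `alt=""` passes: that is a legitimate way to mark an image as decorative, which is exactly why this check can be binary while many others can't.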
Suggestion: Build automated accessibility testing into your Definition of Done
- Accessibility sits a little awkwardly in a Scrum environment (where developers test their own work), primarily because the conformance criteria are often so subjective.
- There is, however, a large and important subset of accessibility best practices that can be tested for automatically.
- Developers in Agile environments should subject their code to these tests before calling a task complete.
- QA engineers in Agile shops should never find an automatically testable error, because the developers should already have taken care of it. If they do, the User Story isn’t complete.
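In practice, "part of the Definition of Done" usually means a gate in the build or pre-merge check that fails while any automatically detectable error remains. A minimal sketch, with a hypothetical `accessibility_gate` function standing in for whatever checker your team uses:

```python
import sys

def accessibility_gate(violations: list) -> int:
    """Exit code for a CI step: non-zero while any automatically
    detectable accessibility error remains, so the User Story
    cannot be marked Done."""
    for v in violations:
        print(f"a11y violation: {v}", file=sys.stderr)
    return 1 if violations else 0

print(accessibility_gate([]))                   # 0: clean report, story can close
print(accessibility_gate(["img missing alt"]))  # 1: fix before calling it Done
```

Wiring the gate into CI rather than relying on developers remembering to run it is what turns the suggestion into a Definition of Done.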
Content teams: test for accessibility before publishing
- Content creators are not developers and often know the least about web accessibility.
- As a result of this – and the volume of content they create – content creators can be the source of a significant share of the accessibility errors on a site.
- Content creators’ workflow should include automated testing to ensure the new content they’re about to publish contains no errors.
- These tests should be limited to the content itself, not the surrounding page template.
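Limiting the tests to the content means checking only the fragment the author produced, so template-level problems (which the content team can't fix anyway) never show up in their report. A sketch with an illustrative, deliberately tiny rule set:

```python
import re

def check_content_fragment(fragment: str) -> list:
    """Content-level checks only, run on the author's HTML fragment
    rather than the full rendered page. The two rules here are
    illustrative, not exhaustive."""
    problems = []
    # Images inserted by the author must carry an alt attribute.
    for m in re.finditer(r"<img\b[^>]*>", fragment):
        if "alt=" not in m.group(0):
            problems.append("img missing alt: " + m.group(0))
    # Headings must not be empty.
    for m in re.finditer(r"<h([1-6])>\s*</h\1>", fragment):
        problems.append("empty heading: " + m.group(0))
    return problems

draft = '<h2></h2><p>New article</p><img src="hero.jpg">'
print(check_content_fragment(draft))  # two problems, both fixable by the author
```

This could run as a pre-publish step in the CMS, blocking publication the same way the developers' gate blocks a merge.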
Do definitive accessibility tests only
- In the above scenarios, configure your automated testing tool so that it tests only for things that can be definitively determined to be pass or fail.
- Any given tool will flag some of its results as “Warnings” or “Manual Verification”. Figure out how to turn those tests off: such warning-level results are often incorrect, or require too much subjective interpretation to be an efficient use of time.
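If your tool can't suppress warnings at the source, the same effect can be had by filtering its report. The report shape below is hypothetical, loosely modelled on tools that separate definite failures from "needs review" findings:

```python
# Hypothetical report: a list of results with a status field.
report = [
    {"rule": "image-alt",         "status": "violation"},
    {"rule": "color-contrast",    "status": "needs-review"},
    {"rule": "landmark-one-main", "status": "warning"},
    {"rule": "label",             "status": "violation"},
]

# Keep only definitive pass/fail results; drop warnings and
# manual-verification items entirely.
definitive = [r for r in report if r["status"] == "violation"]
print([r["rule"] for r in definitive])  # ['image-alt', 'label']
```

Everything that survives the filter is a real, fix-it-now error, which keeps the report credible with the development team.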
Not finished yet…
- The thing to keep in mind when doing automated testing is that you are not done.
- If you’re getting clean reports from whatever automated tool you use, great. Pat yourselves on the back, because you’re doing better than the vast majority of websites out there. Regardless, you’re still not done!
- Even the best automated testing tool provides incomplete coverage.
- Work on the understanding that more must be done before you can really claim your site is accessible.
- Specifically, you need to build manual code review, assistive technology testing, and use case testing into various stages of the development process.
Iterate & expand scope
- One of the biggest barriers to adoption of accessibility is the impression that it is nebulous and intrusive.
- Using the approaches outlined above, it’s possible to build processes into your workflow that let accessibility have minimal impact on your business.
- By initially testing for a subset of high-impact issues, you get quick wins that ease the pain for an organization new to accessibility.
- Then build on those successes by adding a few of the more subjective checks and/or manual testing.
- Increasing the scope gradually and deliberately will help minimize the perceived impact.
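One way to make the gradual expansion explicit is a phased rule configuration: each phase only ever adds rules, so teams see scope grow deliberately rather than all at once. The phase boundaries and rule names below are illustrative assumptions, not a prescribed set:

```python
# Hypothetical phased rule sets: start with definitive, high-impact
# checks, then widen the scope as the organization matures.
PHASES = {
    1: ["image-alt", "label", "html-lang"],  # definitive quick wins
    2: ["heading-order", "link-name"],       # a few more checks
    3: ["color-contrast"],                   # more subjective; pair with manual review
}

def rules_for_phase(phase: int) -> list:
    """All rules enabled once a team has reached the given phase;
    earlier phases are always included."""
    return [rule for p in sorted(PHASES) if p <= phase for rule in PHASES[p]]

print(rules_for_phase(2))
# ['image-alt', 'label', 'html-lang', 'heading-order', 'link-name']
```

Because each phase is additive, a report never shrinks when you move up a phase, only grows in a way the team has agreed to in advance.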