User Testing with People with Disabilities: A Practical Guide

By EZUD

Automated tools and expert audits are necessary but insufficient. The most reliable way to understand whether a product works for people with disabilities is to test it with them. Usability testing with disabled participants reveals barriers that no scanner can detect: confusing screen reader announcements, disorienting focus management, cognitive overload from complex workflows, and interaction patterns that conflict with assistive technology strategies.

Why This Testing Matters

Deque’s research consistently shows that automated tools catch only 30-40% of accessibility issues; the remaining 60-70% require human judgment. Expert auditors catch many of these, but they evaluate against standards, not lived experience. A person who uses a screen reader eight hours a day navigates differently from an auditor running NVDA for a single testing session. Their strategies, expectations, and frustrations are qualitatively different, and deeply informative.

Recruiting Participants

Recruitment is the most common barrier teams cite. Several approaches work:

  • Disability organizations and advocacy groups. Organizations like the National Federation of the Blind, the Hearing Loss Association of America, and local independent living centers can connect you with potential participants.
  • Accessibility-focused recruitment panels. Companies like Fable and AccessWorks (operated by Knowbility) maintain panels of people with disabilities who are experienced in usability testing.
  • University disability services offices. Students with disabilities are often willing to participate in usability studies.
  • Internal employees with disabilities. If your organization has employee resource groups for people with disabilities, they may be interested in contributing.

Compensation

Pay participants fairly. Disabled people’s time and expertise are not free. Pay at least the same rate you would pay any usability test participant, and consider a higher rate given the specialized knowledge they provide. Common rates range from $75 to $200 per session depending on length and complexity.

Representation

Aim for diversity within your participant group. Disability is not monolithic. Include people who:

  • Use screen readers (JAWS, NVDA, VoiceOver, TalkBack)
  • Use screen magnification or zoom
  • Navigate by keyboard only
  • Use switch access or alternative input devices
  • Use voice control (Dragon, Voice Control on macOS)
  • Have cognitive or learning disabilities
  • Are deaf or hard of hearing

A study with five to eight participants across different disability types and assistive technologies typically surfaces the most significant barriers.

Planning the Sessions

Environment

  • Remote testing is often preferred. It allows participants to use their own equipment, configured with their own assistive technology settings. Tools like Zoom with screen sharing and recording work well.
  • In-person testing requires verifying physical accessibility of the testing space: ramp access, accessible restrooms, appropriate lighting, and quiet conditions.
  • Ask participants in advance about any accommodations they need.

Tasks

Design tasks that mirror real usage scenarios. Avoid overly prescriptive instructions that bypass the navigation challenges you want to observe. For example, “Find and purchase a blue widget” is more natural than “Click the Products menu, then click Widgets, then click Blue Widget.”

Keep sessions to 60-90 minutes to avoid fatigue. Some participants may need more breaks or shorter sessions.

Facilitation

  • Use person-first or identity-first language based on the participant’s preference. When in doubt, ask.
  • Do not touch a participant’s assistive technology without permission.
  • If a participant gets stuck, observe before intervening. The struggle itself is data.
  • Record sessions (with consent) for team review. Screen recordings with audio capture assistive technology output.

For more on facilitation in inclusive settings, see inclusive design workshop facilitation and disability etiquette in the workplace.

Analyzing and Reporting Findings

  • Map each observed barrier to the relevant WCAG 2.2 success criterion where applicable.
  • Distinguish between accessibility defects (violates WCAG) and usability issues (technically accessible but practically difficult).
  • Include direct participant quotes (with consent) to make findings concrete for development teams.
  • Prioritize findings by severity: task-blocking issues first, then degraded experience, then minor friction.

Feed findings into your accessibility bug triage process and remediation plan.
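The analysis steps above can be sketched as a small data model: each finding records a severity, any applicable WCAG criteria, and participant quotes, then triage sorts task-blocking issues first. This is an illustrative sketch only; the field names, severity labels, and example findings are assumptions, not a standard schema.

```python
# Illustrative sketch of recording and triaging usability findings.
# Field names, severity labels, and example data are assumptions, not a standard.
from dataclasses import dataclass, field

# Triage order: task-blocking first, then degraded experience, then minor friction.
SEVERITY_ORDER = {"task-blocking": 0, "degraded": 1, "minor-friction": 2}

@dataclass
class Finding:
    description: str
    severity: str                                       # key in SEVERITY_ORDER
    wcag_criteria: list = field(default_factory=list)   # e.g. ["2.1.1"] when applicable
    quotes: list = field(default_factory=list)          # participant quotes (with consent)

    @property
    def is_defect(self) -> bool:
        # Accessibility defect (maps to a WCAG criterion) vs. usability issue
        # (technically accessible but practically difficult).
        return bool(self.wcag_criteria)

findings = [
    Finding("Focus jumps to top of page after a form error", "degraded", ["2.4.3"]),
    Finding("Checkout button unreachable by keyboard", "task-blocking", ["2.1.1"]),
    Finding("Product filter labels verbose under screen reader", "minor-friction"),
]

# Sort by severity so remediation planning sees blockers first.
triaged = sorted(findings, key=lambda f: SEVERITY_ORDER[f.severity])
for f in triaged:
    kind = "defect" if f.is_defect else "usability"
    print(f"[{f.severity}] ({kind}) {f.description}")
```

A structure like this makes it straightforward to hand findings to bug triage: defects file against the cited WCAG criteria, while usability issues route to design review.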

Common Pitfalls

  • Testing too late. If the product is nearly shipped, teams resist making changes. Test during design and development when changes are cheaper. This is the shift-left principle in action.
  • Treating it as a one-time event. Accessibility usability testing should recur, especially after major redesigns or new feature launches.
  • Conflating simulation with testing. Wearing a blindfold does not replicate the experience of a blind screen reader user. There is no substitute for testing with actual disabled users. See our article on disability simulation training ethics.

Key Takeaways

  • Automated tools and expert audits cannot replace testing with people who use assistive technology daily.
  • Recruit diverse participants across disability types and compensate them fairly.
  • Remote testing on participants’ own devices and configurations yields the most realistic results.
  • Map findings to WCAG criteria, prioritize by severity, and feed results into remediation plans.
  • Test early, test repeatedly, and never substitute simulation for real user feedback.
