
Usability testing process

Jacline Contrino edited this page Dec 19, 2024 · 5 revisions

Introduction

The U.S. Web Design System (USWDS) is a toolkit that helps U.S. government agencies build accessible, user-friendly websites. It provides code and guidance for components (like buttons, form fields, etc.), patterns, and other elements that web teams can use.

We audit the accessibility of our components and conduct usability testing with people with disabilities. Observing actual use of our components helps us understand how usable and accessible our design system is and what improvements we need to make. Here’s our process:

End-to-end process for accessibility-first usability testing

When usability testing, we ask representative participants to complete a few tasks on a prototype, live website, or app. We test up to six components in one round of usability testing sessions, usually covering all of them in each individual test.

Our process is informed by best practices in usability testing with participants with disabilities as well as our experience from past usability tests (see ‘Buffalo’ and ‘Zebra’ component batch testing).

Testing cadence

We aim for one round of usability testing per quarter for either:

  • A new batch of components
  • Validation testing for previously tested components

Our process:

  1. Determine a need and priority to test either:

    • An existing component or pattern (as part of an accessibility audit or due to known problems)
    • A new component or pattern not yet in our design system
  2. Establish a baseline level of accessibility. Before usability testing, core USWDS team members perform quality assurance (QA) and manual accessibility tests. These initial tests (color contrast, alternative text, etc.) are done in-house to catch basic accessibility issues.

    Automated accessibility scans we might do:

    Manual accessibility testing we do:

    • Keyboard navigation
    • Screen readers available to the team — currently NVDA, VoiceOver, Narrator, JAWS
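
    While the team's actual scans use dedicated tooling, the idea behind an automated check can be illustrated with a minimal, hypothetical sketch: a scanner that flags `<img>` elements with no alternative text at all (this is an illustration only, not the team's tooling).

    ```python
    # Minimal sketch of one automated accessibility check: flag <img>
    # elements that lack an alt attribute entirely. (alt="" is valid for
    # decorative images, so we only flag a missing attribute.)
    from html.parser import HTMLParser


    class AltTextAuditor(HTMLParser):
        """Collects <img> tags that have no alt attribute at all."""

        def __init__(self):
            super().__init__()
            self.missing_alt = []

        def handle_starttag(self, tag, attrs):
            attr_dict = dict(attrs)
            if tag == "img" and "alt" not in attr_dict:
                # Record the src so the issue is easy to locate.
                self.missing_alt.append(attr_dict.get("src", "(no src)"))


    def audit(html: str) -> list[str]:
        auditor = AltTextAuditor()
        auditor.feed(html)
        return auditor.missing_alt


    # Example: one decorative image (alt=""), one labeled, one missing alt.
    sample = (
        '<img src="divider.png" alt="">'
        '<img src="logo.png" alt="USWDS logo">'
        '<img src="chart.png">'
    )
    print(audit(sample))  # → ['chart.png']
    ```

    Real scanners (and the manual testing above) go much further, but the pattern is the same: run a repeatable check first to catch basic issues, so usability sessions can focus on problems only humans can find.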
  3. Create a research plan. We determine what we are trying to learn, why, and everything we need to do this usability testing, including:

  • Participants: We decide who needs to be recruited depending on our research questions. We typically need 8-12 participants for a usability test (note: if it is a majority-qualitative test without rigid structure, PRA clearance is NOT needed), plus an optional 1-2 for pilot tests (preliminary tests to troubleshoot any issues before the real tests). We try to recruit at least 2-3 people for each type of assistive technology used or disability represented that we’re focusing on in that round (for example, 3 screen reader users, 3 voice command users, 2 neurodiverse participants, etc.).

  • Recruiting: We recruit participants through partner organizations that serve people with disabilities, and draw from the USWDS research public participant panel we started building in early 2024 (see our recruitment outreach procedure). We sometimes also recruit using GSA’s Voice of the Public program.
  • Compensation: We compensate participants per hour and may increase compensation depending on the length and complexity of the test.
    • Standard testing is 5 per hour
    • Screen reader session participants are typically compensated 00 or more since tests often take longer.
  • Session length: Our sessions typically take anywhere from 30-90 minutes.
    • Plan for 10-15 minutes at the beginning of each session to do ‘tech checks,’ go through the consent form if participants couldn’t complete it ahead of time, and help the participant feel more at ease by starting conversationally.
  • What/how much we test: We aim to test 5-6 components in a single round of testing, but the number can depend on the goals of the study and the complexity of what we’re testing.
  4. Share the research plan with the core team, Product Lead, and Experience Lead so we can finalize our goals and approach to testing. Once we agree, we move forward with recruiting participants for usability testing.

  5. Select participants. If we need to recruit outside of our panel, we proactively reach out to community organizations.

  6. Build prototypes. The researcher creates mockups of the prototype(s) needed for testing (currently in Mural), communicates requirements to core team developers, and collaborates with the team to refine them. Prototypes are hosted in our development sandbox: you can see an example prototype we used in our “zebra” batch component testing.

  7. Schedule sessions and send the participation agreement. The researcher determines several days and timeframes available for testing sessions (both the pilot test and the real tests) and emails participants to schedule, keeping different time zones in mind. The researcher also sends the participation agreement as an online form so participants have time to read and fill it out before the session. The participation agreement includes information about the study’s goals, lists details about the specific testing session, and serves as informed consent to participate. See an example participation agreement from 18F.

  8. Do a pilot test. The researcher runs at least one pilot test to troubleshoot any issues that might come up in a real test, such as issues with technology, logistics, or facilitation.

  9. Resolve any issues that came up in the pilot test. Once all issues are resolved, we move forward with the usability testing study.

  10. Prepare before the day of the session. Things we keep in mind:

  • Understand how to use teleconferencing software while using a screen reader, to help participants if needed. The VA has an excellent resource for planning testing that involves assistive devices.
  • Invite an appropriate set of note-takers and observers:
    • Always have at least one Accessibility Specialist present as a note-taker or observer.
    • Limit to no more than two observers. While we want to include the team in observing testing to promote empathy and understanding, we need to create a balance with participants’ comfort. It may be intimidating to have extra observers on the call. Consider who needs to observe the session live and who can simply watch the recording afterwards.
  • Protect participants:
    • Request permission to record in the participant agreement form AND verbally at the start of the session. Once the researcher starts the recording, they say “thank you for allowing us to record this session.”
    • Provide fake user data where needed in the prototype.
    • If needed, ask the participant to disable autofill in their browser’s settings.
    • Protect personally identifiable information (PII) in recordings.
      • If it's ok for the participant to keep their camera off for the session, let them know at the beginning of the session.
      • In your teleconferencing software, use available settings to protect PII as much as possible. For example, set it to not display participants’ names, choose recording options that show only the participant’s shared screen and not their face, and so on.
  11. Send a reminder to participants to confirm their attendance. We send a reminder a day before the test using the contact information they noted works best for them.

  12. Conduct the testing sessions. Follow the detailed steps in our testing day checklist.

  13. Analyze and synthesize the results of the test. Share key insights and recommendations with the team and other stakeholders.

  14. Create GitHub issues from the findings. Write issues so they’re useful both to the community (since they’re linked from the “Known issues” section of the component page) and to the USWDS core development team.

  15. If we find accessibility or usability problems with components, create issues in the USWDS-Site GitHub repo to add that information to the relevant pages on the USWDS website. The researcher then assigns them to an appropriate developer on the team.

  16. Share this research with the community and public. Publish these reports in the USWDS GitHub wiki and link to them from our USWDS Research page. We also link to the individual GitHub issues on individual component pages as “Known issues” where appropriate.

  17. Track progress on issues. The researcher follows up with the development team on issues and solutions, and consults with them as needed.

  18. Make improvements and test again. Set up another round of testing and start the process again.

This process (up to step 15) often takes 2-3 sprints.
