How to Moderate a Usability Test: A Comprehensive Guide
When planning usability tests, one of the first critical decisions is choosing between moderated and unmoderated sessions. While unmoderated testing offers cost efficiency and scalability, it often falls short in providing deep qualitative insights. Moderated sessions, particularly valuable for early-stage designs and complex systems, allow teams to uncover rich, nuanced feedback that can prove even more valuable than quantitative data. In this article, we go into detail on how to effectively moderate a usability test.
After defining assumptions, creating an experiment plan, and crafting usability test plans, it's time to run the sessions. This stage requires careful moderation to ensure the insights are unbiased, actionable, and valuable.
1. Setting the Stage: Pre-Test Questionnaires
Pre-test questionnaires serve multiple purposes: they help qualify participants into appropriate cohorts (such as advanced users and novices), establish baseline attitudes toward the product, and gather initial impressions about usability and terminology.
Key areas to explore in these questionnaires include:
Previous experience with similar products
Frequency of relevant tasks or activities
Comfort level with technology
Initial impressions of the product's visual appeal
Understanding of key terminology
A pre-test questionnaire should not be too long or detailed. Your core learning will come from the participants’ actions, not their answers to the questionnaire. Keep it short and focused on the most important opinion-based and experience-based questions.
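If you run many sessions, it can also help to capture the questionnaire as structured data so every participant answers the same short set of questions and the scores are easy to compare later. Here is a minimal sketch in Python; the question IDs, wording, and 1-5 Likert scale are assumptions for illustration, not a prescribed format.

```python
# A minimal pre-test questionnaire captured as structured data so that every
# participant is asked the same short set of opinion- and experience-based
# questions. The question IDs, wording, and 1-5 Likert scale are illustrative.
PRE_TEST_QUESTIONS = {
    "prior_experience": "How much experience do you have with similar products? (1 = none, 5 = a lot)",
    "task_frequency": "How often do you perform tasks like these? (1 = never, 5 = daily)",
    "tech_comfort": "How comfortable are you with new technology? (1 = not at all, 5 = very)",
    "visual_appeal": "From first impressions, how visually appealing is the product? (1 = not at all, 5 = very)",
    "terminology": "How clear is the product's terminology? (1 = confusing, 5 = clear)",
}


def validate_responses(answers: dict) -> dict:
    """Check that every question has an answer in the 1-5 range."""
    for question_id in PRE_TEST_QUESTIONS:
        score = answers.get(question_id)
        if score is None or not 1 <= score <= 5:
            raise ValueError(f"Missing or out-of-range answer for {question_id!r}")
    return answers


# Example: one participant's pre-test responses.
participant_pre = validate_responses({
    "prior_experience": 4, "task_frequency": 5, "tech_comfort": 4,
    "visual_appeal": 3, "terminology": 2,
})
```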
2. Starting on the Right Foot: A Clear Orientation Script
Before diving into the test, an orientation script sets expectations and ensures participants feel at ease. You could rely on some notes, but a script ensures all necessary points are covered while also minimising the risk of unintentionally biasing participants by leading them in a specific direction. This can easily happen as you progress through your tests and start to see patterns emerge.
A well-crafted orientation script ensures participants understand what to expect and how to contribute effectively. Your script should cover three key areas:
First, set clear expectations about the session's purpose and structure. Explain that you're testing the product, not the participant, and that their honest feedback helps improve the user experience.
Second, introduce the think-aloud protocol, where you want people to verbalise their thoughts as they complete tasks. This is going to feel a bit awkward for some participants, so it can be helpful to provide an example of what you're looking for. "I'm looking for the menu button - ah there it is in the corner" or "I'm not sure what to do next, I'm going to click here and see what happens". Our goal is not just to see whether people act as we expect, but to understand why they do what they do.
Third, emphasise that participants cannot offend anyone with their feedback. Make it clear that candid, honest responses are exactly what you need, even if they seem negative. There is no right or wrong answer, and you're not looking for them to succeed or fail. You're looking to understand how they interact with the product.
3. Running the Test: The Art of Moderation
To start the test, you should have the tasks printed out on separate cards. You will ask the participant to complete each task in turn while you observe and take notes. Up until this point, everything is the same as an unmoderated test. The difference now is that there is a moderator in the room, and this changes the dynamic of the test.
Successful moderation requires a delicate balance. You need to guide participants through the test without leading them, remain impartial, and know when to probe and when to observe silently. The key responsibilities of the role include:
Build rapport: It is uncomfortable to do anything with someone looking over your shoulder. The more relaxed the participant can be, the better the test. Start with casual conversations to put the person at ease.
Maintain impartiality: Tell the participant that you had no hand in the design of the product even if you did. If you’ve built up even a little rapport, the person is going to downplay problems in the system to try not to offend you (see The Mom Test for more on this risk). Be aware of your voice and body language, particularly when a person makes a mistake. The worst thing you can do is make someone feel self-conscious or stupid during the test.
Know when to probe: When people take unexpected actions, you will want to understand why they made those choices. But you need to balance the disruption to the test flow against the need to uncover insights. In early-stage tests, where you are working with paper or low-fidelity prototypes, you are less likely to be concerned with flow, so it is acceptable to ask probing questions. However, during later-stage tests with interactive prototypes or MVPs, it is best to wait until after the test to ask your probing questions.
Do not rescue participants: If people start going down the wrong path, let them. This is a test of the product, not the person. If they are struggling to complete a task, it is likely that other users will too. Silence can be awkward but, since participants should be thinking aloud, it signals confusion or deep thought. Use it as a cue to observe behaviours rather than filling the gap with commentary. You can encourage the person to keep trying if they are getting frustrated, but the only time you should step in is if the person is at the point of leaving the test.
4. Post-Test Questionnaires: Uncovering Attitude Shifts and Hidden Insights
Post-test questionnaires, using similar questions to those in the pre-test questionnaire, allow you to measure how participant attitudes shift after interacting with the product, and they often surface fascinating contradictions between how people rate their experience and what was observed in the session.
For instance, a participant who struggled with navigation might still rate the system as "very intuitive" in their post-test responses. You could use this apparent conflict to probe further during your debrief: "I noticed you rated the navigation as intuitive – could you tell me more about that, particularly in relation to the task where you were searching for account settings?"
Finally, the post-test questionnaire provides one other benefit: use the time while participants are filling it out to review your session notes and prepare thoughtful follow-up questions based on your observations. This preparation ensures your debrief discussion will be more focused and productive.
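Because the post-test questionnaire reuses questions from the pre-test one, the attitude shift is simply the per-question difference between the two sets of scores. Below is a minimal sketch, assuming hypothetical question IDs and the same 1-5 Likert scale as before.

```python
# Per-question attitude shift: post-test score minus pre-test score, for the
# questions that appear in both questionnaires. Scores use the same 1-5
# Likert scale; the question IDs and values below are illustrative.
def attitude_shift(pre: dict, post: dict) -> dict:
    """Return the score delta for every question asked both before and after."""
    return {question: post[question] - pre[question]
            for question in pre if question in post}


pre_scores = {"visual_appeal": 3, "terminology": 2, "tech_comfort": 4}
post_scores = {"visual_appeal": 4, "terminology": 1, "perceived_ease": 5}

print(attitude_shift(pre_scores, post_scores))
# {'visual_appeal': 1, 'terminology': -1}
# A drop in "terminology" after hands-on use, or a high rating despite an
# observed struggle, is a prompt for a debrief question, not a conclusion.
```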
5. Conducting Effective Debriefs
The post-test debrief is where you can dig into some of the unexpected actions that people performed. Start with open-ended questions like "How did that go for you?" before diving into specific areas of interest. The goal is to understand why people behaved the way they did: “I noticed you paused here—what were you thinking?”
If observers are present, establish a system for them to submit questions through the moderator. Different observers will notice different things based on their backgrounds, so having a way to relay their questions, without the participant facing a panel of people, ensures that you can uncover all of the insights.
When available, showing alternative designs during the debrief can yield valuable comparative feedback. If there was an area where the person struggled, asking them to evaluate different approaches and explain their preferences can provide a useful signal. Be careful not to take these insights too much at face value: they are opinions, not real behaviours.
6. Capturing and Organising Insights
Immediately after each session, conduct a debrief with observers to capture fresh insights while they're top of mind. Structure the discussion to start with general impressions before analysing each task in detail, and make sure that you document all of the insights captured.
One of the biggest temptations is to start implementing fixes based on feedback from a single usability session. Instead, look for patterns across multiple sessions; this is where the real insights lie (and it helps you to prioritise your limited design time).
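One lightweight way to keep the focus on patterns is to tag every observation with the task and a short issue label, then count how many distinct participants hit each issue before deciding what to fix. A minimal sketch follows, where the tags, tasks, and three-participant threshold are assumptions for illustration.

```python
from collections import defaultdict

# Each observation is (participant_id, task, issue_tag). The tags, tasks, and
# the "seen by at least three participants" threshold are illustrative.
observations = [
    ("p1", "find_account_settings", "missed_menu_icon"),
    ("p2", "find_account_settings", "missed_menu_icon"),
    ("p3", "find_account_settings", "missed_menu_icon"),
    ("p1", "checkout", "unclear_terminology"),
    ("p4", "checkout", "unclear_terminology"),
]


def recurring_issues(observations, min_participants=3):
    """Group observations by (task, issue) and keep those hit by enough people."""
    seen_by = defaultdict(set)
    for participant, task, issue in observations:
        seen_by[(task, issue)].add(participant)
    return {key: len(people) for key, people in seen_by.items()
            if len(people) >= min_participants}


print(recurring_issues(observations))
# {('find_account_settings', 'missed_menu_icon'): 3}
```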
Conclusion
Remember that effective moderation is both an art and a science. While following these guidelines provides a strong foundation, each session will present unique challenges and opportunities. Stay flexible, maintain a neutral stance, and focus on gathering authentic user insights that can drive meaningful product improvements.
By following these moderation principles and techniques, you can ensure that your usability tests are productive, insightful, and actionable.