Creating an Effective Usability Test Task Plan

There is plenty of advice on how to run generative interview sessions, and even on what not to do in usability tests: The Mom Test, for example, warns against asking people what they think of your product. But there is a gap around how to actually create good usability tests. If you want to improve in this area, this guide is for you.

Once we have confirmed that customers actually want a solution to the problem that we are looking to solve, we need to make sure that they want OUR solution. This is where usability testing comes in.

We want to gather critical insights into how users interact with our solution, which means creating a working prototype or MVP. But how do we actually run the tests to get the most value from each session? This is where a high-quality task plan comes in.

A task plan provides structured scenarios that mirror real-world usage while allowing you to observe natural behavior. This enables you to confidently identify opportunities for improvement and justify your design decisions.

Essential Elements of an Effective Task Plan

1. Task Definition

Your tasks must reflect real-world scenarios that users would naturally encounter. Instead of generic instructions like "create a project," specify contextual scenarios such as "Your client has requested a website redesign project with a three-month timeline. Set up the project and allocate your team resources."

2. Expected Steps

Document the "happy path" - the ideal sequence of actions a user would take to complete the task. This serves as a baseline for comparing actual user behavior and identifying points of friction. However, remember that users might find alternative valid paths that you hadn't considered.

3. Required Materials

List everything needed to conduct the test, including:

  • Prototype or product version

  • Required feature access

  • Test environment setup

  • Any supplementary materials or data

  • Login credentials or user roles

4. Success Criteria

Define clear metrics to measure task completion, such as:

  • Time to complete

  • Error rates

  • Number of attempts

  • Navigation efficiency

  • Successful completion rate

5. Benchmarks

Establish comparison points from similar systems or previous versions to contextualize your findings. This might include industry standards, competitor performance, or your own historical data.
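
One way to keep these five elements consistent across tasks is to capture them in a lightweight template. The sketch below uses a Python dataclass purely as an illustration, with field names and example values that are assumptions rather than a standard format; a spreadsheet or document template works just as well.

```python
from dataclasses import dataclass, field

@dataclass
class TaskPlan:
    """Minimal template covering the five elements of a task plan."""
    task_definition: str              # contextual, real-world scenario
    expected_steps: list[str]         # the documented happy path
    required_materials: list[str]     # prototype, data, credentials, etc.
    success_criteria: dict[str, str]  # metric name -> target
    benchmarks: dict[str, str] = field(default_factory=dict)  # comparison points

plan = TaskPlan(
    task_definition="Your client has requested a website redesign project with a "
                    "three-month timeline. Set up the project and allocate your team resources.",
    expected_steps=["Create project", "Set timeline", "Assign team members"],
    required_materials=["Prototype build", "Sample client data", "Test logins"],
    success_criteria={"completion_rate": ">= 80% within 5 minutes",
                      "errors": "<= 2 non-critical per session"},
    benchmarks={"manual_setup": "time taken by the current manual process"},
)
```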

A Real-World Example: ProjectAI Task Plan

Let's create a sample task plan for ProjectAI, a project management tool designed for agencies. It automates resource allocation when project timelines slip and incorporates workflows for client approval of changes.

Sample Task: Managing Project Timeline Slippage

Task Definition: "You've just learned that your design team needs an additional week for the logo development phase. Use ProjectAI to adjust the timeline and get client approval for the change while ensuring your development team's resources are automatically reallocated to maintain project efficiency."

Expected Steps:

  1. Navigate to project timeline view

  2. Locate the logo development phase

  3. Adjust the timeline by one week

  4. Review automated resource reallocation suggestion

  5. Accept or modify the suggested changes

  6. Generate client approval request

  7. Preview and send the change notification

Materials Required:

  • ProjectAI test environment

  • Sample project data

  • Client and team member profiles

  • Resource allocation matrix

  • Timeline visualization tools

Success Criteria:

  • Sample size: 50 test sessions

  • Task completion rate: 80% of users complete within 5 minutes

  • Resource reallocation acceptance: 75% of users accept the automated suggestion without modifications

  • Client approval workflow: 85% of users generate a properly formatted request with all required details

  • Error rate tolerance: Maximum of 2 non-critical errors per user session, with no blocking errors that prevent task completion

  • Navigation efficiency: Users should reach key features within 3 clicks 95% of the time

Benchmarks:

  • Manual resource reallocation typically takes 60 minutes

  • Traditional client approval processes average 48 hours
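
To make these criteria easy to audit after the sessions, you can score the recorded data with a short script. This is a minimal sketch that assumes you have per-session records of completion time, errors, and click depth; the record fields and example values are hypothetical.

```python
from statistics import mean

# Hypothetical session records captured during testing.
sessions = [
    {"completed": True, "minutes": 4.2, "noncritical_errors": 1, "max_clicks_to_feature": 3,
     "accepted_suggestion": True, "approval_request_valid": True},
    {"completed": True, "minutes": 6.1, "noncritical_errors": 2, "max_clicks_to_feature": 4,
     "accepted_suggestion": False, "approval_request_valid": True},
    # ... one record per participant
]

def pct(flags):
    """Percentage of sessions for which the flag is true."""
    return 100 * mean(1 if f else 0 for f in flags)

completed_in_time = pct(s["completed"] and s["minutes"] <= 5 for s in sessions)
accepted = pct(s["accepted_suggestion"] for s in sessions)
valid_requests = pct(s["approval_request_valid"] for s in sessions)
within_3_clicks = pct(s["max_clicks_to_feature"] <= 3 for s in sessions)

print(f"Completed within 5 min: {completed_in_time:.0f}% (target 80%)")
print(f"Accepted reallocation suggestion: {accepted:.0f}% (target 75%)")
print(f"Valid approval requests: {valid_requests:.0f}% (target 85%)")
print(f"Reached key features within 3 clicks: {within_3_clicks:.0f}% (target 95%)")
```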

What to Look Out For During Usability Testing

  1. Navigation Choices
    Pay close attention to how users navigate the interface. Are they guessing or confidently moving through menus? Do they utilize breadcrumbs or search functionality?

  2. Decision Points
    Observe where users pause or hesitate. Decision points, such as confirming changes or interpreting resource allocation suggestions, often reveal ambiguities in the design.

  3. Time on Task
    Measure how long users take to complete each task. Excessive time may indicate complex workflows or poorly prioritized UI elements.

Avoiding Common Antipatterns

1. Unclear Tasks

Always pilot your tasks with 1-2 users before formal testing. In our ProjectAI example, ensure users understand terms like "resource reallocation" and "timeline slippage" without needing clarification.

2. Overly Long Tasks

Keep tasks between 5-15 minutes. Break down complex scenarios into smaller, manageable tasks. For instance, separate timeline adjustment and client approval into two distinct tasks if necessary.

3. Transfer Learning Effects

If testing multiple tasks, vary their order across participants so that learning from one task does not inflate performance on the next; the sketch after this paragraph shows one simple way to do it. This is particularly important when testing different approaches to resource allocation or approval workflows.
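
One straightforward way to counterbalance is to rotate the task order per participant (a basic Latin-square-style rotation) instead of relying purely on chance. The sketch below is illustrative; the task names are just placeholders.

```python
def rotated_orders(tasks, participants):
    """Assign each participant a rotated task order so every task
    appears in every position roughly equally often."""
    n = len(tasks)
    return {p: [tasks[(i + p) % n] for i in range(n)] for p in range(participants)}

tasks = ["Adjust timeline", "Review reallocation", "Send client approval"]
for participant, order in rotated_orders(tasks, participants=6).items():
    print(f"Participant {participant + 1}: {order}")
```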

4. Demos Instead of Tasks

In a Water-Scrum-Fall world, many products are not usable by customers until the very end, so teams fall back on demoing the product rather than letting users attempt tasks themselves. If you know exactly what you need to build, it is more efficient to build the system component by component; the tradeoff is that until all the components are ready, the system does not work end to end, and a demo cannot reveal the usability problems that a real task would.

But in product development we are always building something new, so we need to evaluate whether it works before investing too much. We can break the feature into multiple end-to-end slices that offer increasingly complex functionality. It takes longer to build this way, but you get the benefit of learning from customers and fixing mistakes before they become too expensive to undo.

Conclusion

A well-designed task plan is fundamental to obtaining valuable usability insights. By focusing on real-world scenarios, clear success criteria, and careful observation of user behavior, you can uncover genuine usability issues and opportunities for improvement.

Ultimately, a thoughtful task plan is the key to unlocking a better product - one usability test at a time.