Ranking Continuous Research Methodologies

Not all research methods are equally suited to continuous research, where the goal is to get high-quality signals as quickly and cheaply as possible. This article presents a framework for evaluating and ranking continuous research methodologies based on how well they reveal what users do, why they do it, and how flexible each method is for ongoing use.

To rank different research methodologies effectively, we must first establish clear criteria for each dimension of evaluation. We will look at how some common methods perform across four dimensions:

  1. Time and Cost. We need to prioritise quick and cheap research methods.

  2. Quantitative strength. How accurately the research helps us quantify real customer actions.

  3. Qualitative strength. How accurately the research helps us understand the motivations behind actions.

  4. Flexibility. How easily we can pivot and adapt the research method to focus on areas of interest.

Judging Qualitative and Quantitative Strength

Time and Cost are unambiguous metrics, so it’s easy to compare different methods on them. Qualitative and quantitative strength are harder to quantify and rank. The key to ranking methods on these dimensions is to understand the number one rule of research: people lie!

People don’t intentionally lie or try to mislead, but when asked about their behaviours they will often describe how they would like to behave rather than how they actually behave. People are also very bad at understanding why they take certain actions. If asked, they will provide an answer; the only problem is that it is unlikely to be true. Given this reality, we can construct four levels of increasing quality in understanding customer behaviours.

  1. Opinions and Predictions
    These have very poor predictive accuracy for real behaviour.

    1. "How important is data security to you?"

    2. "Would you recommend this product to others?"

    3. "What features do you think would make this product better?"

  2. Self-Reported Behaviours
    Human memory is very unreliable and prone to availability bias, frequency illusion and more.

    1. "What steps do you take when onboarding a new team member?"

    2. "When was the last time you encountered this error?"

    3. "How many times did you use this feature last week?"

  3. Contextual Research
    Asking the person to demonstrate or simulate the real behaviour.

    1. "Could you show me where you keep your team documentation?"

    2. "I see you've created several custom templates - could you explain how these fit into your workflow?"

    3. "Can you show me your current project folder structure?"

  4. Direct Observation
    Watching the person perform the action in a real environment. This often does not involve asking questions, because we want to avoid observer bias, where being observed changes the person’s behaviour. The best approach is to blend into the background and watch as the person goes about their job or task.

Now that we have a way of comparing the quality of different methods we can complete our ranking process.

Ranking Methods

There are hundreds of different research methods available, but we have chosen some of the most popular primary research methods for this comparison.

| Method | Time and Cost | Quantitative Strength | Qualitative Strength | Flexibility |
| --- | --- | --- | --- | --- |
| Product Usage | Low. Quick and cheap | High. Shows us real behaviours | Low. No rationale behind actions | Medium. Takes time to set up new metrics |
| Field Studies | Medium. Takes time to travel and observe | High. Shows us real behaviours | High. Can determine motivations | Medium. Mostly observing |
| Diary Studies | Medium. Takes time to organise, train and follow up | Medium. Relies on self-reported behaviour | Medium. Relies on self-reported behaviour | Medium. Difficult to change conditions |
| Customer Interviews | Medium. Can be quick and cheap | Medium. Relies on self-reported behaviour | High. Can determine motivations | High. Can follow interesting answers |
| Customer Advisory Board | Low. Recurring meetings once set up | Low. Risk of groupthink | Medium. Often self-reported behaviour | Low. Structured agenda |
| Surveys | Low. Quick and cheap | Medium. Relies on self-reported behaviour | Low. Self-reported and often short answers | Low. Cannot change questions |

Each method involves trade-offs; there is no “perfect” method, so we need to mix and match the methods we choose.

Given that we need to deliver results quickly and cheaply, we can look first at the best-performing methods in the cost category: Product Usage, Surveys and Customer Advisory Boards. Comparing these, we can see that Product Usage delivers high-quality quantitative data, whereas the others return poor qualitative and quantitative results.

Understanding the “why” behind customer actions is critical in order to anticipate how they might react to a new feature or product change. Ranking the methods by their qualitative strength gives us Field Studies and Customer Interviews. Field Studies let you observe real behaviour in an authentic environment, so they should be preferred when possible. But for many teams, conducting field visits is not feasible, in which case interviews that focus on contextual questions can still deliver valuable insights.
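To make this selection process concrete, here is a minimal sketch in Python that encodes the comparison table and applies the two steps above: filter for low-cost methods, then rank all methods by qualitative strength. The numeric mapping (Low = 1, Medium = 2, High = 3) is an assumption added purely for illustration and is not part of the framework itself.

```python
# Assumed numeric mapping for the Low/Medium/High ratings (illustrative only).
SCORE = {"Low": 1, "Medium": 2, "High": 3}

# Ratings taken from the comparison table above:
# (time and cost, quantitative strength, qualitative strength, flexibility).
methods = {
    "Product Usage":           ("Low",    "High",   "Low",    "Medium"),
    "Field Studies":           ("Medium", "High",   "High",   "Medium"),
    "Diary Studies":           ("Medium", "Medium", "Medium", "Medium"),
    "Customer Interviews":     ("Medium", "Medium", "High",   "High"),
    "Customer Advisory Board": ("Low",    "Low",    "Medium", "Low"),
    "Surveys":                 ("Low",    "Medium", "Low",    "Low"),
}

# Step 1: keep only the cheapest methods (low time and cost).
cheap = {name: r for name, r in methods.items() if r[0] == "Low"}

# Step 2: among those, pick the strongest quantitative signal.
best_quant = max(cheap, key=lambda name: SCORE[cheap[name][1]])

# Step 3: across all methods, rank by qualitative strength to find the "why".
by_qual = sorted(methods, key=lambda name: SCORE[methods[name][2]], reverse=True)

print("Low-cost methods:", list(cheap))
print("Best quantitative signal among them:", best_quant)
print("Strongest qualitative methods:", by_qual[:2])
```

Run against the ratings in the table, this sketch surfaces Product Usage as the cheap source of quantitative data and Field Studies plus Customer Interviews as the strongest sources of qualitative insight, mirroring the reasoning above.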

Conclusion

The challenge in planning your research methods is not simply choosing the most trustworthy method, but in finding the optimal balance between trustworthiness, cost, and practical constraints. Combining product usage analytics with customer interviews or field studies enables teams to understand both the real customer behaviours as well as the reasons behind them.

Every situation is different, though, so if you cannot get access to customers for interviews or field visits you may need to compromise and explore alternative approaches. The key thing to keep in mind is that different methods have different trade-offs in terms of quality and flexibility.

Whatever methods you choose, the important thing is that you perform some research to gain a better understanding of your customers so you can build products based on insights rather than guesses.