How to Create an Experiment Plan (And Why)
It is far too easy to fall into the trap of believing that customers will love what we are building. That is why we go to the effort of breaking down our solutions, identifying assumptions, and then trying to evaluate them. But there is still one hurdle left to overcome - our bias.
Creating meaningful experiments is harder than it seems. Have you ever gotten feedback, whether it is for a product or just in general, and then immediately gone on the defensive?
“They don't understand what we're trying to achieve”, “they're not seeing the big picture”, “the experiment was flawed”, “they're not the target audience”. It's a natural reaction, but it's also a missed opportunity. Most of our ideas will fail, so we need to be ready to hear it. I'm not saying that you should accept bad experiments, but the time to debate whether the experiment design is correct is before you run the experiment, not after it.
The other challenge is that we don’t learn solely by doing, we learn by reflecting on what we’ve done. This means that we need to have a clear idea of what we expect to happen and then, if the actual results are different, we need to query which of our initial assumptions were incorrect so we can adjust for next time. You might think that if you’re running short experiments, you don’t need to write down your expectations and reasoning because you’ll remember them. But here comes the tricky part: it is impossible to truly remember how we thought before gaining new knowledge because learning alters our neurons.
The Neuroscience of Learning and Memory
When you remember something, your brain activates the same neural pathways used when you first experienced it. Every time you remember it you are strengthening those pathways. But when you learn something new, your brain creates fresh neural connections. Once these new pathways are formed, it becomes nearly impossible to remember exactly how you thought before because those original neural pathways have been modified or replaced.
This manifests as hindsight bias - “of course that's what would happen”. But hindsight bias limits our learning because we never reflect on what we originally believed and why it turned out to be wrong. This is why documenting our experiment plans up front is so critical.
The Essential Elements of an Experiment Plan
To combat these challenges and ensure meaningful learning, a well-structured experiment plan should include:
Hypothesis: A clear, testable statement of what you expect to happen and why. This should be specific enough to be proven wrong and include both your prediction and the reasoning behind it. This is what will drive your learning if the experiment fails.
Target Audience: Detailed description of who you're testing with and why they're appropriate for this experiment. Include relevant demographics, behaviours, and contextual factors.
Success Criteria: The metrics that you will track as well as the exact threshold for success. This should also include the number of tests that you plan to run, so the sample size is fixed in advance.
Timeline: Establish a start and end date for the experiment. One of the biggest temptations is to stop an experiment early or let it keep running, depending on whether it is giving you the results you want. Defining dates upfront protects you from yourself.
Method: Detail how you’ll execute the experiment, including control measures and how you'll minimise bias.
Procedure: The step-by-step breakdown of the tests or the task plans that you have created to execute the tests.
Materials and Tools: List the resources, tools, or platforms you’ll need to conduct the experiment.
Budget: Outline the resources available, including time, money, and personnel.
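The elements above amount to a small, reviewable document. As a sketch of how a team might capture one in structured form (the field names and example values here are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: field names, types, and values are
# assumptions for the example, not a required format.
@dataclass
class ExperimentPlan:
    hypothesis: str                      # prediction plus the reasoning behind it
    target_audience: list[str]           # who you're testing with and why
    success_criteria: dict[str, float]   # metric name -> success threshold
    start_date: date                     # fixed upfront to stop you ending early...
    end_date: date                       # ...or letting the experiment run on
    method: str                          # how you'll execute and minimise bias
    procedure: str                       # link to the step-by-step task plan
    materials: list[str]                 # resources, tools, platforms
    budget_usd: float                    # time, money, and personnel

plan = ExperimentPlan(
    hypothesis="3 explained allocation options -> decision in under 5 minutes",
    target_audience=["Project Managers, 3+ years' experience, teams of 10+"],
    success_criteria={"selects_option_within_5_min": 0.80},
    start_date=date(2025, 1, 6),
    end_date=date(2025, 1, 17),
    method="20 moderated usability tests",
    procedure="link-to-task-plan",
    materials=["ProjectAI prototype", "pre- and post-test questionnaires"],
    budget_usd=2000.0,
)
```

Writing the plan as a single record like this makes it easy to version, review before the experiment, and compare against afterwards.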
Real-World Example: ProjectAI Experiment Plan
Let's look at how this works in practice with ProjectAI, our fake project management tool for agencies that automates resource allocation and streamlines client approval workflows.
Hypothesis
We believe that providing transparency into AI resource allocation decisions through multiple options (rather than a single solution) will increase trust and reduce time spent double-checking the AI's work. Specifically, we expect that when users can choose between 3 clearly explained allocation options, they will spend less than 5 minutes making a decision and proceed without manual verification of the chosen solution.
Target Audience
Project Managers with 3+ years' experience
Managing teams of 10+ people
Currently using traditional project management tools
Managing at least 2 concurrent projects with shared resources
Working in digital agencies or software development companies
Success Criteria
Primary: 80% of users select an AI-proposed option within 5 minutes
Secondary:
Less than 20% of users attempt to manually verify or adjust the chosen AI solution
70% of users can accurately explain why the AI made its recommendations
85% would feel comfortable explaining the chosen solution to their team or client
Zero instances of users creating their own manual solution instead of choosing an AI option
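Pre-registered thresholds like these are easiest to hold yourself to when the comparison is mechanical. A minimal sketch of that check, with hypothetical observed results (the metric names and numbers are made up for illustration):

```python
# Pre-registered thresholds, written down before the experiment runs.
# Metric names and observed values are illustrative, not real data.
thresholds = {
    "selected_within_5_min": 0.80,       # primary criterion
    "can_explain_recommendation": 0.70,  # secondary
    "comfortable_presenting": 0.85,      # secondary
}

# Hypothetical results gathered after the experiment.
observed = {
    "selected_within_5_min": 0.85,
    "can_explain_recommendation": 0.65,
    "comfortable_presenting": 0.90,
}

# Each metric passes only if it meets or beats its pre-set threshold.
results = {metric: observed[metric] >= t for metric, t in thresholds.items()}
passed = all(results.values())

for metric, ok in results.items():
    print(f"{metric}: {'PASS' if ok else 'FAIL'}")
print("Overall:", "PASS" if passed else "FAIL")
```

Because the thresholds were fixed before the data came in, a FAIL here is a learning prompt ("which assumption was wrong?") rather than an invitation to move the goalposts.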
Timeline
Week 1: Participant recruitment and screening
Week 2: Conducting experiments and analysis
Method: 20 moderated usability tests
Procedure: Link to Task plan (not completed in this example)
Materials: ProjectAI set up with manually created resource allocation options; pre- and post-test questionnaires
Budget: $100 per participant × 20 participants = $2,000
Anti-Patterns to Avoid
Vague Hypotheses
A poorly defined hypothesis, such as “users will like the feature,” offers no actionable insight. Be specific about what you expect to see and why.
Unrealistic Expectations
Overly ambitious success criteria can set you up for disappointment. In early experiments it is fine to set low thresholds that give the experiment a chance to pass - you can raise the threshold as you continue to refine your offering.
Skipping Documentation
Hindsight bias is real. Without documenting why we are running the experiment, we will limit what we can learn.
Ignoring Negative Results
Negative outcomes are not failures - they’re valuable learning opportunities. Use them to iterate and improve.
The Value of Planning
By documenting all these elements before running the experiment, we create a foundation for genuine learning. When the results come in, we can compare them against our original expectations rather than retrofitting our interpretation to match the outcomes.
This approach helps us:
Avoid confirmation bias
Build institutional knowledge
Track our evolution in understanding
Make better decisions about product direction
Remember, the goal isn't just to run experiments - it's to learn from them. A well-crafted experiment plan ensures that learning is based on evidence rather than post-hoc rationalisation, leading to better product decisions and more valuable insights.
Without this structure, we risk falling into the trap of seeing what we want to see rather than what the data actually tells us. The time invested in planning pays off in the quality and reliability of our learnings.