“We should test that to find out for sure…”
Words often heard in group discussions where the solution to a problem is less than clear. Despite a team’s desire to get real answers to real questions, many lack the research expertise to do so. To remedy this, product managers, designers, and developers stretch their skills to formulate and execute tests—what I call scrappy testing.
In my experience with scrappy testing, however, it can be pretty easy to make basic mistakes. Moving too quickly and leaping right into a test can produce sloppy results that aren’t actionable; believe me, I’ve learned this the hard way. You need a shared understanding across your team of what is being questioned and how it’s being tested. Over time, building that understanding led our team to develop a simple template that reinforces key considerations in advance of a test. Let’s have a look at those (and be sure to download the template included at the bottom).
What is the goal of our test?
You should be able to state, simply, what it is you are looking to find out. Here’s an example: “Determine whether reducing the number of questions on the form will result in a higher completion rate.” It’s simple, it’s singular, it’s measurable. If you find yourself listing three or four bullet points, you should reconsider the scope of what you are trying to accomplish and focus your team on a single question.
What assumptions have we made?
Take a minute to gut-check yourself. Do you have assumptions that, if not true, would weigh heavily on the outcome of your test? Take time to document them, as others may not be aware of them.
What does success look like?
It is critical to have a clear definition of success. This sounds obvious, but it can be easy to botch. Your success criteria should be measurable and leave little room for interpretation. Most importantly, clear success criteria will help you identify the mechanisms you’ll need in place to measure performance.
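To make “measurable” concrete, here’s a minimal sketch for the form-completion example mentioned earlier. It assumes your success criterion is something like “the shorter form’s completion rate beats the control’s, and the difference is unlikely to be noise,” and it uses a standard two-proportion z-test (the counts below are hypothetical):

```python
import math

def completion_rate_test(control_done, control_total, variant_done, variant_total):
    """Compare two completion rates with a two-proportion z-test.

    Returns (lift, z): lift is the absolute difference in completion
    rate (variant minus control); z is the test statistic. As a rough
    guide, z > 1.645 suggests significance at the 5% level, one-sided.
    """
    p_control = control_done / control_total
    p_variant = variant_done / variant_total
    # Pooled rate under the null hypothesis that both forms perform the same.
    pooled = (control_done + variant_done) / (control_total + variant_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_total + 1 / variant_total))
    return p_variant - p_control, (p_variant - p_control) / se

# Hypothetical numbers: 200 of 1,000 visitors finished the long form,
# 260 of 1,000 finished the shortened one.
lift, z = completion_rate_test(200, 1000, 260, 1000)
```

The point isn’t the statistics; it’s that a criterion phrased this way tells you exactly what to instrument (started vs. completed counts per variant) before the test runs.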
What actions if the test succeeds?
Have an idea for next steps should your test succeed. Measuring is one thing, knowing what to do with the results is another. Presumably, you’re testing something in order to move forward in a particular direction. Document that. Make sure your team knows what actions this test will lead to if successful. This question also helps you identify operational tasks that need to happen once complete. For example, you should probably remove your test code at some point…right?
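On that cleanup point: one way to keep test code easy to remove is to gate the variant behind a single flag, so tearing it down is one deletion plus a dead branch. A minimal sketch (the flag name and question shape are hypothetical):

```python
# Hypothetical flag for the shortened-form test; delete it, and the
# branch it guards, once the test concludes.
SHORT_FORM_TEST_ENABLED = True

def form_questions(all_questions):
    """Return the list of form questions to render for this visitor."""
    if SHORT_FORM_TEST_ENABLED:
        # Variant: show only the required questions (the test condition).
        return [q for q in all_questions if q["required"]]
    # Control: the full form.
    return all_questions
```

Because every trace of the experiment sits behind one name, “remove the test code” becomes a searchable, ten-minute task instead of an archaeology project.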
What actions if the test fails?
Consider also what your next steps are should your test fail. This question is my favorite. Rarely will your predictions on how a test will perform be correct. Having a loose plan for a failure scenario before a test is run will allow you to execute on an objective course of action should things not roll your way. In some cases, asking this question may actually help you realize how committed you are to solving a particular problem. If this is the first test and the answer to this question is “We’re just going to move on”, you may want to reconsider if there’s something with more business value you should be doing. It may take a few tries to get the solution right, so the outcome should be worth the effort.
Who will benefit from what we learn in this test?
The information you are uncovering is valuable. It’s also easy to sit on. Other teams across your organization will benefit from the data you’ve uncovered. Senior leaders will benefit from having data points as they talk with big customers and strategic partners. Account service and sales folks can back up the value of new features. Marketing teams will incorporate your results into various sales materials. This data is an asset with value well beyond the walls of your team. Share it. Spread it around.
Asking these questions is not intended to be a long, laborious exercise. Rather, it’s a quick, centering one meant to make the most of your testing time. Here’s a simple template to serve as a starter for your next test. Build clarity amongst your team and improve the effectiveness of your experiments. Happy testing!