Since my first A/B test in 2014, and more than a hundred since then across different industries, I have had the good fortune of wrestling with how to extract true value from such campaigns. Along the way I have discovered repeatable process ideas that are rarely discussed in A/B testing articles on the internet. This article is about those learnings.
Let's look at some of those challenges and solutions in the following scenarios.
Scenario 1: Over-testing one prominent feature
A website, even one built for a single product or purpose, is supported by many factors, both visible and not so visible, that play into user conversion and experience.
Landing pages and Request for Information (RFI) forms are often the big components that get tested repeatedly after simpler CTA and image tests. This is especially true for B2B companies and for most B2C companies driven by large single purchases rather than FMCG-style sales; for example car dealerships, education, overseas travel and so on. Since RFI forms are direct indicators of user intent, marketing teams justifiably experiment with combinations of questions, their sequence and the staging of RFI forms. Many teams, however, chase lifts in RFI form KPIs and downplay other aspects of the website. Teams that instead follow a cycled approach to site conversion can optimize their organization's website and associated digital properties much faster. A small test cycle is discussed below:
- Incremental RFI tests (2–3 tests at most): If the very first test gives you a lift, great! Implement the winning change and move on to the next part of the website. If not, move on to another section after 2–3 tests and return later to test new RFI variations.
- Optimizing the journey after RFI submission: Depending on your business type, there is likely an additional opportunity to continue connecting with the customer and leave them with an enhanced experience. For example, in the case of an automaker, once an RFI has been submitted or a meeting set up, you can offer a review video of the car, inform the customer about potential freebies at local dealerships, or highlight other unique features of the car. There is a lot of potential here for every industry and business type.
- Testing the not-so-visible components of your website: A/B testing is not limited to gauging visitors' reactions to the visible aspects of a website. A high-performing team can (and should) play an important part in shaping the navigation structure of any website. Experimenting with aspects like pathing to key product pages, feature placement and menu options can create more enduring connections with visitors and, subsequently, more conversions.
Scenario 2: Book-keeping of Campaigns
Let's take a leaf from accounting and maintain a profit-and-loss statement of all tests, reviewed quarterly or annually.
It's very common to test, implement and forget. It's rare for teams to maintain a record of the tests conducted over the last six months or a year. While A/B tests are usually short-term projects, a combined reflection on all the test campaigns from that period can yield an enormous amount of learning. It minimizes the reliance on the next rock-star idea and gives the marketing team the ability to plan 'seasonal' tests. It also helps in sequencing the components you want to test, rather than testing in an unorganized fashion or over-testing one idea.
To do this, a simple proactive step borrowed from accounting is to manually record the final profit/loss result of each test in a simple Excel sheet or shared document. It costs nothing except an extra manual step and some discipline. In such a spreadsheet, one can easily set up columns to record the performance of past campaigns, their sample size, running duration and other key factors.
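As a rough sketch of such a test ledger (the column names here are illustrative, not a prescribed schema), even an append-only CSV maintained by a small script does the job:

```python
import csv
import os

# Illustrative columns; rename to match your own reporting needs.
FIELDS = ["test_name", "start_date", "end_date", "sample_size",
          "primary_kpi", "lift_pct", "result"]

def log_test_result(path, row):
    """Append one finished test to the ledger, writing a header if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)
```

The same record works equally well in a shared spreadsheet; the point is the discipline of writing every result down, win or lose.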
Scenario 3: Testing Calendar
Flexibility is a valued trait, and being fluid enough to test new ideas is appreciable, but not at the cost of process discipline and the timeline of the testing calendar every time. Maintaining a balance is key to the overall optimization process and its success.
Having a testing calendar is the other side of book-keeping the tests. As a team matures, it usually has some sort of calendar planned for tests. The challenge, however, lies in following it with discipline.
Often, the ideas and timeline decided at the start of a quarter or year (depending on how your organization plans) are left on the sidelines in favor of testing other ideas. Two common reasons for this are:
- Change in management or leadership emphasis: This often happens on the back of a new performance report, a new feature launch, or simply because things weren't going right. In a few cases a more flexible approach is indeed required, and we must let go of some pre-decided tests to make way for ones based on new learnings. However, it's important not to abandon all the test ideas that were decided earlier as part of a larger strategy. An internal priority order can be maintained, and low-priority ideas phased out for new ones. The intent here is not to undermine flexibility within teams, especially as this is very much a 'test and learn' exercise, but to strike a balance between long-term strategy and possible quick wins along the way.
- Constraints on IT resources: This is more of an operational issue, and it can be planned around effectively unless there is no resource at all and you need to recruit someone. Popular A/B testing tools like Optimizely, VWO, ABTasty, or an integrated add-on to other marketing tools, can help minimize the dependency on IT, but there will still be many tests that need IT help. As an indicator, any test involving pathing ideas, menus, or integrating FAQs, chat and the like will likely require IT help beyond just a tool.
Scenario 4: Analysis of Campaign
An over-discussed phrase but under-utilized in essence: it's important to be neutral in the analysis process and thorough at the same time.
Many articles on the internet talk about sample sizes, test duration, confidence intervals, key KPIs to look for, and some standard device or channel segmentations. These are subject essentials that we must follow, but they alone won't spell out an effective story from the analysis.
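As a rough planning aid for those essentials (a sketch using the standard normal-approximation formula for a two-proportion test; the function name and defaults are illustrative, not something this article prescribes):

```python
import math
from statistics import NormalDist

def sample_size_per_variation(p_baseline, mde_rel, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation for a two-proportion test.

    p_baseline: current conversion rate (e.g. 0.04 for 4%)
    mde_rel:    minimum relative lift worth detecting (e.g. 0.10 for +10%)
    """
    p_variant = p_baseline * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / (p_variant - p_baseline) ** 2
    return math.ceil(n)
```

Dividing the result by expected daily traffic per variation also gives a first estimate of test duration, which feeds directly into the testing calendar discussed earlier.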
Working with different analysts and managers, I have sensed that many of us (and even business leaders) tend not to go deep enough, or demand deep enough analysis, when the results favor the variation we want to win. The reverse also happens almost as often: we go overly deep, defining very fine segments to find favourable indicators for the variation we or our managers want to win. As analysts, it is on us to approach the analysis from a neutral standpoint and be thorough about it.
One approach I have found useful is drawing up an approach document, what I internally call a 'Measurement Plan', while the test is still in design or implementation. Collaboration is fundamental to an analyst's job. Depending on how your organization works, you can share a prospective measurement plan with the respective people to solicit their feedback. Even when working alone, a plan on paper helps you reflect back when actually analyzing the results, and avoids rework.
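As a sketch of what such a plan might capture before the test runs (every field name and value below is hypothetical, not a template the article mandates):

```python
# Illustrative 'Measurement Plan' captured as structured data, agreed on
# before results exist, so the analysis cannot drift toward a favorite.
measurement_plan = {
    "test_name": "RFI form: 3-step vs single page",
    "hypothesis": "Splitting the RFI form into steps raises completions",
    "primary_kpi": "rfi_submission_rate",
    "guardrail_kpis": ["bounce_rate", "page_load_time"],
    "segments": ["device", "traffic_channel"],  # decided up front, not post hoc
    "min_sample_per_variation": 40000,
    "planned_duration_weeks": 4,
    "decision_rule": "Ship B if primary KPI lifts at 95% confidence "
                     "and no guardrail KPI degrades",
}
```

Fixing the segments and decision rule up front is what guards against the selective deep-diving described above.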
Scenario 5: Taking a Decision & Post Implementation Monitoring of the Result
Customizing industry standards to your own organization remains the key. Also, the last 5% of the testing process remains unfinished until you monitor the results.
There will be times when the result of a test is not clear and a round of voting on recommendations takes place. In such situations it's usual for everyone to recommend and advocate past industry standards. While these are a good reference point, I believe they shouldn't be treated as a holy grail, simply because many are too old to be relevant, and what applies to your organization can differ from what has been recommended for an industry at large. Understanding your immediate business objective and your own audience, and customizing the recommendation accordingly, is the key in such situations.
Finally, whatever it is that gets implemented, do monitor! It's the simplest, most intuitive and yet most easily overlooked part of the process. By this time one can feel the test is complete and start preparing for the next one. But the work is still only 95% done, and the last 5% of the process remains.
Normally, winning variations behave as expected when deployed to 100% of traffic, but occasionally the performance falls short, for a multitude of reasons. So monitor on a consistent basis after the full launch and share updates with all stakeholders.
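A minimal sketch of such a post-launch check, assuming you compare live conversions against the rate the winning variation showed during the test (the function name and one-sided z-test threshold are illustrative choices, not the article's prescription):

```python
from statistics import NormalDist

def post_launch_alert(test_rate, live_conversions, live_visitors, alpha=0.05):
    """Flag when the live conversion rate runs significantly below the rate
    the winner showed during the test (one-sided z-test on a proportion)."""
    live_rate = live_conversions / live_visitors
    # Standard error under the null hypothesis that the test rate still holds.
    se = (test_rate * (1 - test_rate) / live_visitors) ** 0.5
    z = (live_rate - test_rate) / se
    # Alert when "live rate >= test rate" can be rejected at the alpha level.
    return z < NormalDist().inv_cdf(alpha)
```

Running a check like this weekly after the 100% launch turns the "last 5%" into a routine update for stakeholders rather than an afterthought.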
Conclusion: There are many other points, but I'll rest my case here. Hopefully some of the points discussed above will resonate with you and help in your ongoing testing process. Many of you will have different opinions, and they might be more effective than the ones discussed in this article, so it would be great if you could share any feedback or process ideas that have consistently worked for you. I would love to discuss the subject and learn from concurring as well as dissenting points of view. After all, that's what A/B test campaigns are all about.
Thanks for your time!
P.S. This article was first published on LinkedIn here