In my experience building products so far, I have mostly used data analysis to find out where the problems are. But I have not been able to find out why the problems exist solely on the basis of data. The decisions I take are generally based on a mix of data analysis (quantitative analysis) and customer research (qualitative analysis). This applies to almost any of the product and growth-related projects I have undertaken in the past, such as increasing conversion, reducing churn, etc.

Let me elucidate this with an example.

At BrowserStack, I led a growth project to increase conversions for our product, Live, which let customers manually test their websites on various browsers. We defined conversion as the ratio of users who purchased a subscription to those who visited the website. The first thing I did to get started on this project was to sketch out a simple conversion funnel for Live.

Here’s a view of the steps that made up our funnel:

[Figure: Conversion Funnel for BrowserStack Live]

Most of the steps above are self-explanatory. Without getting into too many details, let me just state that we defined Step 3 to include some basic product usage, such as testing your website on one or two browsers. Step 4 was defined as usage of a few key features of Live (such as testing on mobile browsers, or a feature called Local Testing); a key feature being one where users who tried it went on to convert at a very high rate. We were able to figure out the key features among the multitude of features the product had by comparing the pre-purchase behavior of two cohorts from the same time period: users who purchased a subscription, and users who did not.
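To make that cohort comparison concrete, here is a minimal sketch in Python using pandas. The table layout, feature names, and numbers are all illustrative assumptions rather than our actual schema or data; the point is simply that a feature whose users convert at a much higher rate than the baseline is a candidate key feature.

```python
import pandas as pd

# Hypothetical per-user feature-usage log for trial users; the schema,
# feature names, and numbers below are illustrative assumptions.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 4, 5],
    "feature": ["mobile_browsers", "local_testing", "screenshots",
                "mobile_browsers", "local_testing", "screenshots",
                "mobile_browsers"],
}).drop_duplicates()  # count each user at most once per feature

purchases = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5],
    "purchased": [True, False, True, False, False],
})

df = events.merge(purchases, on="user_id")

# Baseline: conversion rate across all trial users.
baseline = purchases["purchased"].mean()

# Per-feature conversion rate and its lift over the baseline; features
# with a high lift are candidates for "key features".
per_feature = (
    df.groupby("feature")["purchased"]
      .agg(users="count", conversion_rate="mean")
      .assign(lift=lambda t: t["conversion_rate"] / baseline)
      .sort_values("lift", ascending=False)
)
print(per_feature)
```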

With the funnel defined, we then had to figure out the bottlenecks in the funnel. Did most users drop off between Steps 1 and 2? Or between Steps 5 and 6? Quantitative data helped us answer this within seconds: most users dropped off between Steps 3 and 4 and between Steps 5 and 6. This meant that there were probably two issues we had to fix (a sketch of the drop-off computation follows the list below):

  • Perhaps the product lacked certain features, or maybe users were not able to discover the key features that would convince them of the value of Live. As a result, there was a massive drop-off between Steps 3 and 4
  • There seemed to be some problem with our pricing, given that users were dropping off between Steps 5 and 6. Were we too expensive? Was our pricing complicated? Did our pricing not address all the various segments/personas that the product catered to?
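The computation behind this is simple. Here is a minimal sketch in Python, assuming we already know the furthest funnel step each trial user reached; the step counts below are made up for illustration and are not our actual numbers.

```python
# Illustrative counts of users who reached each funnel step
# (not BrowserStack's real data).
funnel_counts = {
    1: 10_000,  # visited the website
    2: 4_000,
    3: 2_500,   # basic product usage
    4: 600,     # tried a key feature
    5: 450,
    6: 90,      # purchased a subscription
}

# Step-to-step drop-off: the share of users who reached a step
# but never advanced to the next one.
for step in range(1, 6):
    reached = funnel_counts[step]
    advanced = funnel_counts[step + 1]
    drop_off = 1 - advanced / reached
    print(f"Step {step} -> Step {step + 1}: {drop_off:.0%} drop-off")

# Overall conversion, as we defined it: purchasers / visitors.
print(f"Conversion: {funnel_counts[6] / funnel_counts[1]:.1%}")
```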

Notice the inference I was able to draw based on the quantitative analysis.

I was able to identify the bottlenecks we had to fix in the funnel in order to increase conversion. But I had no idea what product initiatives I should actually undertake to fix the problem. This is where qualitative analysis helped me.

I then decided to email the users who dropped off between Steps 3 and 4 and between Steps 5 and 6. My email was succinct and included a single open-ended question:

To those users who dropped off between Steps 3 and 4, the question was, "Did you face any issues with the product?"

To those who dropped off between Steps 5 and 6, I asked, "Did you face any issues with the pricing?"
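In effect, the targeting boiled down to mapping a user's furthest funnel step to one question. Here is a hypothetical sketch of that logic; the user records and the send_email helper are stand-ins, not our actual tooling.

```python
# Hypothetical per-user records: email address plus the furthest
# funnel step the user reached before going quiet.
users = [
    {"email": "a@example.com", "furthest_step": 3},
    {"email": "b@example.com", "furthest_step": 5},
    {"email": "c@example.com", "furthest_step": 2},
]

# One open-ended question per bottleneck step.
QUESTIONS = {
    3: "Did you face any issues with the product?",   # stalled before key features
    5: "Did you face any issues with the pricing?",   # stalled before purchase
}

def send_email(address: str, body: str) -> None:
    """Stand-in for a real mail client."""
    print(f"To: {address}\n{body}\n")

for user in users:
    question = QUESTIONS.get(user["furthest_step"])
    if question:  # only email users stuck at the two bottleneck steps
        send_email(user["email"], question)
```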

I sent hundreds of such emails and about 20% of the users replied to my email. Through a mix of follow-up emails and phone calls, I was able to figure out exactly why the users were dropping off. I realized that:

  • Our pricing was complicated and had to be revamped
  • We had to offer real mobile devices for testing as opposed to mobile emulators
  • We were also giving away too much value in our free trial as a result of which users just kept creating more free accounts in order to use the product
  • The product had to be faster overall, as our users expected performance equivalent to launching an OS and a browser on their own virtual machines

This simple exercise was instrumental in driving our short-term and long-term product roadmap, which would result in an exponential increase in the conversion rate. Had I stopped at quantitative analysis, I would probably have created a long list of growth experiments that we could undertake to increase conversion. But this would have been tantamount to shooting arrows in the dark: we would have had no idea about the probability of success of those experiments. We wanted to be highly confident about the success of any initiative we undertook at BrowserStack, as that would help us avoid investing effort in projects that had a low chance of success to begin with. This probably goes against the Silicon Valley adage of launching hundreds of experiments in the hope that some of them work out. But for us, research-based decisions were a better strategy than hope.

As a result of this exercise, we were able to take product decisions that exponentially increased our growth over the next few months. Quantitative analysis was instrumental in helping us understand where the problems were. But without qualitative analysis, we would never have been able to figure out how the problems could be solved.



