Vanity Metrics In Experimentation Programs pt.2

In the previous article, we talked about how setting the number of experiments as the metric for the health of an experimentation program can cause more problems than it solves.

This is the second article in the series, and it will probably have you scratching your head as to why this one is a vanity metric.

Let’s just call it outright. The next in the series of vanity metrics in experimentation programs is…

Conversion Rates / Conversion Rate Uplift

Shocking as it may sound, we have seen that this metric promotes all the wrong behaviours.

To understand why, we should look back at how the industry arrived where it is today.

In the past, a lot of vendors and industry “thought leaders” pushed the idea that it was easy to create A/B tests and, with minimal effort, improve a website’s conversion rate.

The idea was simple.

More uplift = more revenue

This had business stakeholders salivating, and CRO was born, seen as a money-printing machine. Even though attribution of revenue to individual tests was sketchy at best, the idea has stuck.

Conversion rate improvements and uplift have been seen as the hallmark of a successful experimentation program. After all, it’s easy to monitor and see whether it’s improving or not.

But herein lies the problem: these experimentation teams are not testing to learn. They are testing to win.

Failed experiments may be lauded in practitioner circles, but they won’t be made visible to the C-level.

When winning is the only thing that counts, teams feel the pressure to deliver it.

These are the real consequences:

  • Teams run tests that are more likely to win.
  • Tests without any possibility of revenue attribution don’t even get prioritised.
  • Learnings are never addressed or shared.
  • Tests become token activities with only the outcome in mind.
  • Data analysis is misreported and made to look like wins.

In one real-world example, we talked to a CRO manager for the retention team of a leading software business.

“We had to only prioritise tests that had a direct revenue attribution”

Imagine how many learnings this organisation missed out on while trying to improve customer retention. They looked at retention as a single event when, in reality, it is the result of multiple touchpoints, each of which could have been improved to secure retention.

How do we improve this?

To answer this, let’s first ask what experimentation really is.

It’s a way of validating or invalidating a hypothesis. It allows the experimenter to analyse the data and build a set of learnings.

The insights gained from experiments should be the main reason any company invests in an experimentation program.

Insights are what will essentially drive innovation and strategies that create a competitive edge.

The better metric we are proposing is: Actionable Insights gained from experiments.

What is an actionable insight?

Any insight that creates business impact and/or helps the experimentation program contribute towards new products, opportunities, channels or a competitive edge is what we would call an actionable insight.

Keeping track of experiments that create actionable insights means that teams spend their time analysing each experiment properly and providing better insights, not just “meaningless numbers”.

It’s worth mentioning that you need guardrails and review systems in place to independently verify and validate the impact of the insights from each experiment. This is not just a numbers game.
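To make this concrete, here is a minimal sketch of how such tracking could work. The `Experiment` record and its fields are hypothetical, not taken from any particular tool: the idea is simply to record, for each experiment, whether it produced an insight that an independent review verified as actionable, and to report that share rather than a raw win count.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    name: str
    hypothesis: str
    outcome: str                 # "win", "loss", or "inconclusive"
    insight: Optional[str]       # the learning captured by the team, if any
    insight_verified: bool       # set by an independent review, not by the test owner

def actionable_insight_rate(experiments: list) -> float:
    """Share of experiments that produced an independently verified, actionable insight."""
    if not experiments:
        return 0.0
    actionable = sum(1 for e in experiments if e.insight and e.insight_verified)
    return actionable / len(experiments)

# Illustrative data: note that a "loss" still counts when it teaches us something.
experiments = [
    Experiment("New checkout CTA", "A clearer CTA lifts checkout starts",
               "win", "Copy clarity matters more than button colour", True),
    Experiment("Retention email timing", "A day-3 email reduces churn",
               "loss", "Churn risk is set before day 3; onboarding is the real lever", True),
    Experiment("Homepage hero swap", "A new hero image lifts sign-ups",
               "inconclusive", None, False),
]

print(f"Actionable insight rate: {actionable_insight_rate(experiments):.0%}")
```

The point of the sketch is the design choice, not the code: losing or inconclusive tests can still move the metric, but only when someone other than the test owner has confirmed the insight, which is exactly the guardrail described above.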

Manuel da Costa

A passionate evangelist of all things experimentation, Manuel da Costa founded Effective Experiments to help organizations make experimentation a core part of every business. On the blog, he talks about experimentation as a driver of innovation, experimentation program management, change management and building better practices in A/B testing.