Six reasons organizations fail at expanding Experimentation Programs
Experimentation, or rather CRO, is widespread in companies that use digital channels to attract and retain customers. It’s no longer a nice-to-have, and companies are actively investing in hiring and growing these teams. Even though there are pockets of companies that don’t actively invest in this area, the general trend shows increased appetite and growth. What’s even better is that these organizations are starting to see the value in growing experimentation beyond the core team that runs experiments. They want to enable experimentation in all parts of the organization. In practical terms, this means that other teams in the wider company, like product owners, are empowered to run their own experiments.
The growth of experimentation can be a good thing if done right. After all, more experiments = more learnings and potential outcomes (or increased revenue for programs that myopically focus only on that).
The reality is very different. Through extensive research that Effective Experiments has carried out, we have seen that programs fail to get going and get stuck on a hamster wheel of sorts. There is superficial progress, but scratch beneath the surface and you start to see a raft of problems that plague the program.
There’s a term in physics called escape velocity.
It’s the minimum speed a rocket needs to overcome Earth’s gravity and break free of its orbit. If a rocket doesn’t reach escape velocity, it will simply stay stuck in orbit, circling ad infinitum.
In this article, I want to share with you how experimentation programs get stuck and fail to expand in a meaningful way beyond the vanity stories you hear about on LinkedIn and at conferences.
No strategic plan for roll out
When the words “We want everyone in the organization to experiment” are uttered, Optimizers, CROs & Experimentation Specialists get very excited. It’s their turn to shine and share their knowledge with the wider organization. Getting others to learn more about what they have been doing all this time is an exciting proposition. The fact that these new teams will also be running tests is even better.
However, that enthusiasm often results in half-baked training sessions. These mainly focus on the technical aspects of building and running tests, peppered with the optimizers’ own success stories. The workshops, talks and roadshows are all done peer-to-peer, with optimizers talking to people at their own level. They dart from one team to the next, expecting the same enthusiasm for testing that they bring to the table – yet different teams have different capabilities and levels of understanding.
When there is no strategic plan in place, the result is a spray-and-pray approach. It’s no surprise, then, when the enthusiasm in the wider organization fizzles out and teams revert to the status quo after a few weeks or months.
Led by the wrong people
CROs are not the right people to lead business transformation.
Experimentation program management, rollout and evangelization are not technical in nature. They are part of a complex change management process that will transform the way the business makes decisions. This involves people, their motivations and goals, and their capacity to accept or push back on new initiatives.
Optimizers are technicians, and amazing at what they do – coming up with creative test ideas, analyzing research, building and planning tests – but when it comes to project management and internal sales & marketing, they are found lacking. This isn’t an indictment of the optimizers’ ability but a reality check: you’re asking them to master a variety of skill sets that are, in their own right, specialist skills.
Experimentation should be led at C-level by someone with the key skills to drive change and monitor continuous progress (but we’ll cover this in another blog post). If the responsibilities of leading change are handed to a technician, the outcomes are predictable.
No intent & motivation
As mentioned before, CROs will be very enthusiastic about the prospect of evangelizing experimentation and will talk a lot about the technical aspects of it.
However, to really drive awareness and enthusiasm, one must really understand two things:
- the dynamics of an organization
- how to make it appealing to the individual or unit so that they want to be part of it
When optimizers try to convince other teams on a peer level, they expect them to take responsibility for running tests when the remit to do so isn’t there. Testing then becomes additional work for the new teams, who see it as exactly that: an optional task with no consequences for not doing it, and no requirement to do it at all, let alone in the correct way.
This is because the optimizers don’t really have the authority to drive this change. It’s easy to dismiss or deflect their requests for change because the other teams don’t have anything in their OKRs or KPIs that require them to report back on this.
They have no motivating factors to run tests or to change the way they work. They may have been shown how to run tests, but what’s missing is WHY they need to run tests. They are missing the WIIFM (What’s in it for me?), and the optimizer, being a technician, never spent adequate time understanding the team’s motivating factors.
Lack of process – onboarding, follow through & sharing
Organizations are always keen to give autonomy to teams to allow them to reach the outcomes however they see fit. This is appealing to the teams but creates problems for the organization when it comes to scaling.
Autonomy without proper guardrails manifests itself as chaos where different teams do whatever they see fit.
When onboarding new teams, it’s not enough to show them how to test and what tools to use. You have to show them the right and wrong ways of going from idea generation to running an experiment and beyond. What are the best practices? What are the areas to be mindful of?
If teams decide to use their own processes, seemingly harmless bad practices can set in and consolidate over time. The result is a normalization of deviance that becomes harder to untangle as time passes.
Onboarding teams doesn’t end with training (no matter how many sessions you run). Even with the most detailed onboarding, you will notice teams slowly cutting corners and falling back into their old ways of working.
Every change management model factors this in. Reinforcing a new behavior takes time and needs a constant feedback loop to support it.
Lack of governance
Optimizers tasked with driving the change see onboarding as a tick-box exercise. They already have a lot on their plate running their own tests, and monitoring and providing constant feedback to another team is beyond their available bandwidth.
Unfortunately, this is where teams start slipping through the cracks and deviating from the initial goal. It’s partly down to the previous points – lack of motivation and process – but something else is missing: governance.
Governance is the single most important layer in an experimentation program: it determines whether the program scales predictably or grows chaotically. A governance model defines the guardrails within which a team MUST operate when ideating and running experiments.
An example of governance is an Experiment Scorecard. Reviewing an experiment to see whether it followed the necessary processes – ideation, prioritization, data capture, hypothesis crafting, and so on – will reveal whether teams are doing the work the right way or cutting corners.
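To make the idea concrete, here is a minimal sketch of what an Experiment Scorecard could look like in code. The criteria names, weights and threshold are illustrative assumptions, not a prescribed standard – every program would define its own governance bar.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentScorecard:
    """Hypothetical scorecard: records pass/fail per governance criterion.

    The criteria below are illustrative examples drawn from the article
    (ideation, prioritization, data capture, hypothesis crafting)."""
    experiment_id: str
    checks: dict = field(default_factory=dict)

    CRITERIA = ("ideation", "prioritization", "data_capture", "hypothesis")

    def record(self, criterion: str, passed: bool) -> None:
        if criterion not in self.CRITERIA:
            raise ValueError(f"Unknown criterion: {criterion}")
        self.checks[criterion] = passed

    def score(self) -> float:
        """Fraction of governance criteria the experiment satisfied."""
        return sum(self.checks.values()) / len(self.CRITERIA)

    def flagged(self, threshold: float = 1.0) -> bool:
        """True if the experiment fell short of the governance bar."""
        return self.score() < threshold

# Usage: a reviewer scores a (hypothetical) experiment after it runs.
card = ExperimentScorecard("EXP-042")
card.record("ideation", True)
card.record("prioritization", True)
card.record("data_capture", False)  # e.g. no tracking plan attached
card.record("hypothesis", True)
print(card.score())    # 0.75
print(card.flagged())  # True -> surface to the governance owner
```

The point of a structure like this is that the review is independent of the team that ran the test: the scorecard output goes to whoever owns governance, not back into the team’s own reporting.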
Governance must be done independently of the experimentation teams and the consequences of deviating from it must be made clear to all teams. The person or persons responsible for governance should have no vested interest and must report to the person leading experimentation at the highest level.
If these aren’t in place, then you cannot expect experimentation programs to scale long term.
A system that is rigged
For all the talk of growing experimentation and adopting a culture of experimentation, companies are stuck in the same place as a result of their own making – how the entire system is designed.
We’ve often talked about how CRO is a very watered-down version of what experimentation is truly about. That stems from the way it originated and was sold by vendors and consultants as a way of “printing money”.
If you look at experimentation programs out there, you’re bound to see they’re focused on revenue and conversion rate uplifts – the very things you can’t predict, or else it wouldn’t be an experiment in the first place.
When revenue uplift, conversion rate uplift and number of tests are the leading indicators of success in an experimentation program, it’s no wonder that an organization doesn’t innovate as a result of experimentation.
When the focus is set as revenue and the expectation is wins and guaranteed uplift, this skews the team’s behavior. They become risk averse and biased towards pushing tests that are more likely to hit their KPIs.
In one case, we saw a finance team require that a test meet or surpass a $500,000 projected revenue uplift, or it would not be counted or prioritized. Teams behaved accordingly: they discarded tests that could have been innovative but had no obvious revenue potential. This risk aversion is the exact opposite of what experimentation should enable. The program stays stuck on a hamster wheel where teams move from one test to the next just to tick a box – X tests completed or Y revenue uplift generated.
The lack of engagement or awareness of experimentation from senior leaders means that only select information wrapped in numbers and jargon goes upstream. Whether or not it’s actioned or utilized is a different matter.
Experimentation stays in this status quo because it’s in no one’s interest to change it other than in a superficial way.
Never growing.
Never expanding.
At Effective Experiments, we have built a framework called Experimentation Ops which enables companies to build a scalable experimentation program with solid foundations. This enables growth beyond the core teams and into the wider organization.