How to calculate (and communicate) the ROI of experimentation and CRO
The ROI of CRO and experimentation is (and likely always will be) one of the most hotly debated and divisive topics in the field. However, companies like Microsoft, TaylorMade, and Spotify know that testing is a highly effective way to develop a product and innovate. Moreover, these companies understand that sustaining a successful practice will positively impact their bottom line.
Thankfully, I have noticed a shift in conversation from asking ‘why should we invest in experimentation?’ to discussions centred on the best way to communicate the overall value of experimentation to your organization. In this guide, I definitively address common objections with real-world applications that help communicate the ROI behind building your practice.
The right side of the bet
In the gambling industry, the phrase “the house always wins” is a popular adage. There’s a very good reason for this. While the casino doesn’t necessarily profit from every game played, the odds are tilted in its favor so that across a large volume of games played, the cash always ends up back in the coffers. Players may walk out of the casino with more or less money than before, but the average person leaves with less.
Experimentation is not all that different from the casino in this case. The majority of companies may not have a winner for every experiment, but with enough tests, the odds are good that you will come out ahead.
In his book “Antifragile,” author Nassim Taleb introduces the idea of convexity. Without being too tangential, convexity essentially describes a system that benefits from disorder. “Disorder,” in this case, represents an unpredictable input that could cause something to either gain or lose randomly. When a system is convex, it gains more from favourable outcomes than it loses from unfavourable ones.
So what does this have to do with experimentation and CRO?
The beauty of experimentation is that the downside is capped (you don’t ship a losing experience), but the upside is infinite (there’s no limit on how much you can lift a conversion rate, even >100%).
With an extensive upside and limited downside, CRO is a great example of a convex system. The “disorder” in this example is every time an A/B test is introduced to the system. Each A/B test (called a trial) creates uncertainty, and thanks to convexity, enough trials lead to positive results over time.
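To make the casino analogy concrete, here is a minimal Monte Carlo sketch of a convex testing program. All numbers are illustrative assumptions, not data from any real program: each test costs money, losers are never shipped (capped downside), and only winners pay off.

```python
import random

def simulate_program(n_tests, win_rate=0.3, avg_win=25_000, cost=5_000, seed=42):
    """Simulate a testing program where losers are never shipped
    (downside capped at the cost of the test) but winners pay off."""
    rng = random.Random(seed)
    net = 0
    for _ in range(n_tests):
        net -= cost                  # every test costs money, win or lose
        if rng.random() < win_rate:  # only winning variations get shipped
            net += avg_win           # upside accrues; losses stay capped
    return net

# With enough trials, the program reliably comes out ahead:
print(simulate_program(100))
```

With these assumed numbers, the expected value per test is positive ($2,500), so across enough trials the program almost always ends up in the black, exactly like the house.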
The house always wins.
One of the most humbling things about experimentation is that it’s impossible to predict the outcome of changes made to the user experience. After all, if this were possible, there would be no need to test.
If we accept that we have no control over any individual test output (the result), we should focus our attention on what we can control: the input, research, process, design, and so on. Even the most well-researched test can’t guarantee a positive outcome, but with enough rigour it is possible to improve your chances on average. In baseball, even the best hitters of all time bat below .500, but the top players prepare relentlessly to hit at a higher rate than their peers.
So what can we control?
How to calculate the ROI of your CRO program
To calculate ROI, there are four levers in every experimenter’s toolkit: velocity, quality, impact, and cost.
With these four levers, we can create both a payoff and a cost function.
The payoff equation:
Payoff = Velocity × Win rate × Impact per winning test
The cost equation:
Cost = Velocity × Cost per experiment
For example, let’s use the following values:
- Velocity: 2 experiments/month
- Quality: 30% win rate
- Impact: $25,000
- Cost: $5,000
This gives us a payoff of $15,000 per month (2 × 0.3 × $25,000) and a cost of $10,000 per month (2 × $5,000).
The number that matters when calculating your ROI is $15,000 (payoff) minus $10,000 (cost), which gives us a net gain of $5,000 per month.
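The arithmetic above can be sketched in a few lines, using the same illustrative numbers:

```python
def net_gain(velocity, win_rate, impact_per_win, cost_per_test):
    """Monthly net gain: expected payoff minus spend, per the four levers."""
    payoff = velocity * win_rate * impact_per_win  # expected monthly payoff
    cost = velocity * cost_per_test                # monthly spend on testing
    return payoff - cost

print(net_gain(velocity=2, win_rate=0.3, impact_per_win=25_000, cost_per_test=5_000))
# → 5000.0
```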
The velocity, quality, impact, and costs will vary month to month in any business, but overall it is clear that your company (the house) comes out ahead. If you walked into a casino and found a game that paid you $5,000 per spin on average, you would not want to leave that table.
Note: This model assumes that all levers are independent of each other, which is unlikely to be accurate in practice; the following sections explain why.
Let’s take a look at these controls in more detail because realistically, each company has varying degrees of control over each lever.
Velocity is strictly a function of traffic and conversion rate. Simply put, the more traffic you have and the higher your existing conversion rate, the more experiments you can run. This limit comes from the statistical nature of testing: to establish causality (whether a change in behavior was due to your test), we need to test to an acceptable level of statistical confidence (typically 90-95%). Every website in the world has a limit on how many tests it can support statistically. If you want help with this calculation for your website, send us an email and we’ll be happy to help you out. If you are testing below this number today, you may have room to increase this lever and improve your payoff function, but if you are already at the cap, you may be out of luck in this category.
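To illustrate why traffic caps velocity, here is a standard normal-approximation sample-size calculation for a two-variant test. The 95% confidence and 80% power figures are common defaults, not a prescription, and this sketch is not necessarily the exact method we use for client calculations:

```python
from statistics import NormalDist

def sample_size_per_variant(base_rate, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative lift
    of `mde` on `base_rate` (two-sided z-test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # confidence threshold
    z_beta = NormalDist().inv_cdf(power)           # power threshold
    p1, p2 = base_rate, base_rate * (1 + mde)
    delta = p2 - p1
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / delta ** 2) + 1

# e.g. a 3% conversion rate, hoping to detect a 10% relative lift:
n = sample_size_per_variant(0.03, 0.10)
print(n)
```

With these assumptions, you need on the order of 50,000 visitors per variant for a single test, which shows how quickly a site’s traffic budget gets consumed and why velocity is largely out of your hands.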
To give you an idea of scale, most organizations we see are running between 25-50 experiments per year, but the top testing companies in the world (e.g. Airbnb, Booking.com, Microsoft, Google) are running >10,000/year.
You read that right… >10,000
Quality is a category that practitioners have a fair bit of control over. Typically it’s normal to see win rates anywhere between 10% and 40%; win rates above or below this range are a good indicator that something is wrong. Too low suggests you may not be spending enough time understanding your customers’ challenges, and too high suggests questionable statistical validity or that you are not testing sufficiently innovative changes. By deepening your understanding of the customer’s problem through analytics, customer research, and psychology, you can improve this metric. In addition, rigorous processes, effective design of experiments, and quality assurance (QA) can positively affect win rates.
Conversion has spent years refining a rigorous process that consistently produces win rates around the 40% mark for our clients. Some mature testing programs have reported ever-declining win rates as they optimize the business with thousands of experiments per year. However, our experience at Conversion has not shown this same decline. In fact, our data illustrates that win rates actually increase over time as we gain insights about our clients’ customers.
Impact is almost exclusively controlled by a company’s size. As a matter of fact, experimentation is increasingly more effective the more revenue a company generates through the properties it tests on. A 2% lift for a small business may generate an uplift worth $150,000 over the course of a year, but that same 2% lift for a large enterprise may be worth >$2,500,000 over the course of a year. Same test, same lift, same effort, much more impactful result.
If you think back to our example, going from $25,000 to $2,500,000 per winning test raises the monthly payoff from $15,000 to $1,500,000 (2 × 0.3 × $2,500,000), which impacts your payoff function dramatically. Running an experimentation program is both a responsible and beneficial business decision because while the impact scales with revenue, the cost function stays roughly the same (it doesn’t cost a large organization any more to run a test than a small one). If we think back to convexity, this essentially means that larger organizations are even more ‘convex’ than smaller ones. Naturally, this makes experimentation a more critical component of their business strategy.
The industry standard to calculate the impact per winning test is to project the winning variation’s lift out for a one-year duration and then apply a discount to that projection (typically between 10-30%) to account for false positives and other contamination issues that arise naturally when testing. When it comes to financial projections, it is better to be on the conservative side.
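A minimal sketch of that projection, assuming a $10,000/month revenue lift and a 20% discount (the midpoint of the 10-30% range; both figures are illustrative):

```python
def projected_annual_impact(monthly_revenue_lift, discount=0.2):
    """Project a winning variation's monthly lift over 12 months, then
    apply a conservative discount for false positives and contamination."""
    return monthly_revenue_lift * 12 * (1 - discount)

# $10,000/month lift, discounted 20%, projected over one year:
print(projected_annual_impact(10_000))
```

Erring on the conservative side (a larger discount) keeps your reported numbers defensible when finance teams audit the program.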
While organizations have some degree of control over experiment cost, it can be difficult to calculate. Cost is a blend of the hours (salary) spent creating experiments, the technology license used to conduct the tests, and any consultants or partners who assist in test creation. When considering all these inputs, I would expect each experiment to cost somewhere between $3,000 and $15,000. This will also vary wildly from test to test based on technical complexity and the research that went into the idea. At Conversion, each test is scored as small, medium, or large, and each has varying internal costs.
When you consider that the cost of an experiment is unlikely to vary by more than $10,000, while impact can vary by hundreds of thousands of dollars, it’s clear that cost is not the key lever that leads to a higher payoff.
The real benefits of experimentation and CRO
Now forget everything I just told you.
The advice in this article is predicated on your business being of a minimum size. If your company meets this criterion, experimentation WILL BE an ROI-positive strategy. With enough trials (tests), it is nearly impossible to lose money on experimentation.
Yet even with such substantial ROI potential, the most valuable benefits of testing go far beyond the financials.
Let’s take ROI off the table, and take a closer look at the real benefits of experimentation.
Ego is the Enemy
A company void of ego is freeing. People can do their best work and challenge assumptions when the default answer in your organization is “let’s ask the customer”. Experimentation is a great way to remove bias from key decision makers and democratize voices throughout the organization. The person who gets paid the most doesn’t know more than the customer; letting customers vote with their wallets leads to better experiences. A person’s rank in a company is not strongly correlated with their ability to predict test outcomes. A good experimenter is not better than others at knowing answers; a good experimenter is better at finding answers. Enabling all ranks of an organization to find answers is an incredible competitive superpower.
Winning both sides of the bet
Traditionally in product development and marketing, the alternative to running an A/B test is to add features to a central roadmap that get released periodically. At Conversion we like to make the distinction between “optimization” and “validation”. Optimization is an improvement to an existing feature that would not exist if not for an optimization practice. Validation is the testing of a new feature that was already planned for release as part of the product roadmap. While optimization is the practice of continuously improving existing features, there are still many times when you want to introduce new features as the result of a broader creative vision. These changes may be part of a broader strategy and may not be ROI- or KPI-positive on their own.
The benefit of applying experimentation to the product release cycle is that you can quantify the negative impact of not shipping features that are detrimental to the user experience. In financial terms, there is no difference to the business between optimizing an experience to generate an additional $100,000 in revenue and declining to ship a feature that would have cost the business $100,000 in revenue because of a poor user experience. We refer to this as ‘loss aversion’: a good program should be able to quantify both the additional revenue uncovered and the losses averted.
Creating a Safe Space to Innovate
Fear of failure prevents many organizations from taking the risks that lead to innovation. Experimentation provides a safe framework to test (and fail) innovative changes while measuring effects in detail, limiting exposure, and rolling back changes swiftly. The cultural benefits of creating a safe way to try new ideas and fail can be substantial: people become more willing to voice ideas and take (measured) risks. The main benefit to the executive team is confidence in their decisions:
Every test result offers a risk-weighted recommendation. With this data in hand, the decision maker is able to assess risk and reward.
Reduce the Cost of Development
This might sound counter-intuitive, but let me explain.
Adding more tests should surely add to development costs… right?
Not necessarily. In an agile organization, experimentation should sit before your product roadmap. Experimentation is the MVP (minimum viable product) method for determining whether something is worth spending development resources on. The mistake many companies make is putting all their resources into building perfect, fully functional experiences and only then testing them.
The difference is that world-class testing organizations test roll-outs, but they also validate ideas earlier in the process with MVP experiments or other pseudo-experiments to gauge whether changes warrant continued effort from the development team. A good testing practice ensures engineering works only on features that are critical and impactful for users. Nothing kills development team morale faster than spending months on an experience only to have that work disappear because of a bad test result. Fail early, and fail cheaply.
A Megaphone for the Customer
These days, many organizations call themselves “customer-centric,” but few put this into practice. Enabling a culture of testing is one of the easiest and most systematic ways to become customer-centric. Give your customers and prospects a seat at your executive table by running decisions by them. A well-structured series of experiments can reveal incredibly rich information about the preferences of your core customers; you just have to ask.
CRO adds compounding value
With traditional marketing campaign spend, the value stops the moment you turn the spend off. With experimentation, every winner you uncover adds to an increasingly better user experience. Each time you uncover a 10% winner, that benefit doesn’t go away when the test is over. You may see the impact diminish over time as the competition adjusts, but the beauty of experimenting is that you can stack these winners on top of one another to drive significant value over time. CRO acts much like a capital investment: you pay up front and receive an ongoing dividend quarter after quarter.
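The stacking effect is multiplicative rather than additive, which a few lines make clear (the lift figures here are illustrative):

```python
def stacked_lift(lifts):
    """Winners compound multiplicatively: a 10% lift on top of a 10% lift
    is a 21% improvement over the original baseline, not 20%."""
    total = 1.0
    for lift in lifts:
        total *= 1 + lift
    return total - 1

# Three winners over a year: two 10% lifts and one 5% lift
print(round(stacked_lift([0.10, 0.10, 0.05]), 4))  # → 0.2705
```

Each new winner is applied to an already-improved baseline, which is why mature programs keep compounding value long after individual tests conclude.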
Finally, assess if you can afford to continue:
- Allowing gut decisions to drive your roadmap?
- Assigning development resources to underperforming features?
- Not learning from customer behaviour?
- Stifling junior-level employees’ ideas and contributions?
- Putting your business at risk by not testing various user-experience changes?
The next time you are asked about the ROI of testing, ask yourself, “what is the cost of not testing?”
Becoming an experimentation-driven company will improve your ability to innovate, grow, measure, learn, brainstorm, and become customer-centric. The cost-benefit analysis goes far beyond ROI and will impact the entire culture of your organization when you commit to the process.