One reason we run experiments is that it is difficult, if not impossible, to know reliably whether the changes we want to make to a website will help us reach our objectives. Quite often, changes we expect to help the site turn out to hurt the most important metrics.
The MSN Real Estate site asked us to run a test to improve revenue from an advertising widget. They had a design company create five new widgets to compete with the current one. (By convention, the default user experience is called the Control and the competing alternatives are called Treatments. If no Treatment beats the Control, best practice is to keep the Control.) The experiment tested all six widgets concurrently by randomly assigning one-sixth of users to each widget over a two-week period. Here are the six competitors. Before you read further, test your intuition by guessing which widget performed best.
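As a sketch of how such a random split is commonly implemented (the hashing scheme, function, and variant names below are illustrative assumptions, not details of the MSN system), each user can be deterministically hashed into one of six equal buckets so that the same user always sees the same widget:

    import hashlib

    VARIANTS = ["control", "t1", "t2", "t3", "t4", "t5"]  # one Control, five Treatments

    def assign_variant(user_id: str, experiment_id: str = "realestate-widget") -> str:
        """Deterministically map a user to one of six equal buckets.

        Hashing the user ID together with an experiment ID splits traffic
        uniformly while keeping each user's assignment stable across visits.
        """
        digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

    print(assign_variant("user-12345"))  # the same user always gets the same widget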
Before the experiment, we surveyed about 60 people, asking them to guess which widget would bring in the most advertising revenue. Members of the MSN Real Estate team, the design company, and our experimentation team participated. The widget that won the experiment, Treatment 5, was the one that received the fewest votes. This widget, the simplest of the six, was statistically significantly better than every other widget on both revenue and click-throughs, with a 9.7% increase in revenue over the Control.
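To illustrate how such a comparison is judged, the sketch below applies a standard two-sample (Welch's) t-test to simulated per-user revenue; the data are fabricated to mimic a roughly 9.7% lift and are not the experiment's actual numbers:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical per-user revenue for the Control and Treatment 5; the real
    # experiment used two weeks of live traffic, not simulated draws.
    control = rng.exponential(scale=1.000, size=10_000)
    treatment5 = rng.exponential(scale=1.097, size=10_000)  # ~9.7% higher mean

    res = stats.ttest_ind(treatment5, control, equal_var=False)  # Welch's t-test
    lift = treatment5.mean() / control.mean() - 1

    # A small p-value (e.g., below 0.05) indicates the revenue difference is
    # statistically significant rather than random noise.
    print(f"lift = {lift:+.1%}, p-value = {res.pvalue:.4f}")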
An experiment is a great way to test objectively, with real users under natural conditions, which option performs best. With an experiment we can draw a direct causal link and conclude that switching to the winning alternative will improve our primary metrics. What an experiment does not tell us is why. For example, in this experiment, did Treatment 5 perform better because it took the least space on the page? Because it required the least input from the user? Because it was the least confusing? Or for some other reason? The experiment itself cannot tell us which of these explains the better performance. We can begin to understand why by consulting design and usability experts and by running follow-up experiments that test new alternatives to the best-performing Treatment.
Call centers are a natural place to run experiments because all the elements needed for good experimental results are present. (For a list of these elements, see the White Paper.) A typical call center has potentially hundreds of agents handling many calls per day, and each call can have several quantifiable outcomes related to the objective of interest. For example, if the objective is to improve efficiency, the length of each call can be measured; if the objective is to increase revenue, sales per call can be measured. Other metrics may include customer satisfaction, return-call rate, and customer retention.
In some cases several call centers from the same organization take part in the same experiment, which helps ensure the results apply across all the organization's call centers. Industries that use experimentation in call centers include credit cards, banking, services, retail, and online businesses. Experimentation works equally well whether the calls are inbound or outbound.
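As a minimal illustration of how per-call outcomes roll up into these experiment metrics, the sketch below aggregates a few hypothetical call records; the field names and values are assumptions made for the example:

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class CallRecord:
        agent_id: str
        duration_sec: float   # efficiency objective: call length
        sale_amount: float    # revenue objective: 0.0 if no sale
        satisfied: bool       # e.g., from a post-call survey

    def summarize(calls: list[CallRecord]) -> dict[str, float]:
        """Roll per-call outcomes up into the experiment's key metrics."""
        return {
            "avg_call_length_sec": mean(c.duration_sec for c in calls),
            "sales_per_call": mean(c.sale_amount for c in calls),
            "satisfaction_rate": mean(c.satisfied for c in calls),
        }

    calls = [
        CallRecord("a1", 310.0, 0.00, True),
        CallRecord("a2", 245.0, 49.99, True),
        CallRecord("a1", 420.0, 0.00, False),
    ]
    print(summarize(calls))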
Case Study – Improving Call Center Sales
This organization wanted to improve net sales in its call centers. It had eight call centers that received calls about its credit cards and chose three of them for this improvement effort. In addition to the primary objective of increasing sales, the team wanted to decrease time on call and improve employee satisfaction, since the centers were experiencing high employee turnover. After a number of brainstorming sessions with customer service representatives (CSRs), team leads, and managers, the list of ideas was narrowed down to those that would actually be tested. The experiment ran in three call centers, involved hundreds of CSRs and 24 team leads, and tested 10 ideas.
The ideas they tested, each as a simple on/off factor, were the following (a sketch of how such a multi-factor experiment can be arranged appears after the list):
Sales coach availability (coach ready to coach after any sales call, or not)
Unit manager monitoring of calls (or not)
Use of lead associates as coaches (instead of dedicated sales coaches, or not)
Operations manager available on the floor (or not)
Use of unit managers as coaches (or not)
Increased time off the phone for call center associates (or not)
Increased training on accessing customer and product information (or not)
New-hire coaching (or not)
Self-paced training for call center associates via taped calls (or not)
Self-paced training for call center associates via the Web (or not)
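The case study does not spell out the exact experimental design, but a 10-factor on/off test like this is typically run as a fractional factorial, for example a 12-run Plackett-Burman design, rather than testing all 2^10 = 1,024 combinations. The sketch below builds such a design; the condensed factor names are ours, not the study's:

    import numpy as np

    FACTORS = [
        "sales_coach_available", "unit_mgr_monitoring", "lead_assoc_coaching",
        "ops_mgr_on_floor", "unit_mgr_coaching", "more_off_phone_time",
        "info_access_training", "new_hire_coaching", "taped_call_training",
        "web_training",
    ]

    def plackett_burman_12() -> np.ndarray:
        """12-run Plackett-Burman design for up to 11 two-level factors:
        cyclic shifts of a generating row, plus a final all-minus row."""
        gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
        rows = [np.roll(gen, i) for i in range(11)]
        rows.append(-np.ones(11, dtype=int))
        return np.array(rows)

    design = plackett_burman_12()[:, :len(FACTORS)]  # keep 10 of the 11 columns
    for run, settings in enumerate(design, start=1):
        on = [f for f, s in zip(FACTORS, settings) if s == +1]
        print(f"run {run:2d}: {', '.join(on) or '(all factors off)'}")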
Five of these factors were identified as improving at least one of the key metrics (increasing sales, decreasing call time, or improving employee satisfaction). The increase in net sales was approximately four times what Citibank management had hoped the experiment would achieve, and it translated into millions of dollars in additional sales per year!
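To show how the winning factors are identified in a two-level design like the one sketched above, the example below estimates each factor's main effect from simulated outcomes; the data are fabricated for illustration, while the effect formula (mean outcome with the factor on minus mean outcome with it off) is the standard analysis for such designs:

    import numpy as np

    rng = np.random.default_rng(1)

    # Rebuild the 12-run, 10-factor design from the earlier sketch and
    # simulate one outcome per run (e.g., net sales per call); five factors
    # are given nonzero effects purely for illustration.
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    X = np.array([np.roll(gen, i) for i in range(11)] + [[-1] * 11])[:, :10]
    true_effects = np.array([3, 0, 2, 0, 1, 0, 0, 2, 0, 1])
    y = 50 + X @ (true_effects / 2) + rng.normal(0, 1, size=12)

    # Main effect of factor j: mean outcome with the factor "on" minus the
    # mean outcome with it "off"; the largest positive effects flag winners.
    effects = np.array([y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
                        for j in range(10)])
    for j, e in enumerate(effects, start=1):
        print(f"factor {j:2d}: estimated effect {e:+.2f}")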
In addition to improving sales, the implemented factors also had a notable positive impact on employee morale and engagement.