Support centers run their business and make critical decisions based on metrics.  However, these metrics often fail to give leadership a true picture of the state of the business because they are both unbalanced and self-reported (biased).

Whether or not you have kids in school, there's a good chance you've heard plenty of arguments against the practice of "teaching to the test," where teachers are forced to focus on preparing students for standardized tests rather than teaching a well-rounded curriculum for the grade level and subject matter.  Higher scores on the standardized tests lead to better teacher ratings and school rankings, but they do not necessarily promote the complete education a student needs to be successful at the next level.  Imagine, if you will, a football coach who only practices punt returns, or a basketball coach focused only on free throws.  The team would be great at these single tasks, but would it ever be able to actually win a game?

It may not be called "teaching to the test," but the same practice occurs in many support centers around the globe when management does not use a balanced scorecard. Instead, agents are encouraged to focus on just one or two metrics, such as Average Handle Time (AHT), Customer Satisfaction (CSAT), Quality, or Issue Resolution (IR).  The agents quickly learn to perform well against the metrics they are measured on and neglect the rest of what is important to both the business and the customer.  Below are a few examples I've seen.

  • Focus on AHT - A computer tech support call center I worked with had dissatisfied customers because of long wait times to reach an agent.  Management realized that agent call handle times were extremely long, so they shifted focus from quality to AHT.  The agents were given shorter talk time targets, which were reported weekly.  The agents knew their performance evaluations would be based on their reported AHT, so their solution was to "fix" the majority of issues by telling customers to reinstall the operating system and call back if they needed help after the reinstall was complete.  Most of these issues, it turned out, did not need an OS reinstall. This caused a lot of unnecessary churn for the customers (not to mention data loss), simply because the agents did not want to take the time to troubleshoot the true issue.  In the end, handle times were great - but at the cost of repeat calls, issue resolution, and customer satisfaction - ultimately costing the company more money than it was saving by shortening handle times.
  • Focus on CSAT - Customer loyalty is vital to the long-term success of any business, as dissatisfied customers will not only share their negative experience with others but also not be repeat customers themselves.  However, a singular focus on CSAT can easily drive up costs unnecessarily.  One customer service call center was seeing an increase in negative customer satisfaction surveys.  In response, the primary goal became raising CSAT scores.  As a result, agents began abusing the concessions available to them to pacify customers.  They weren't resolving issues or the reasons for complaint at a higher rate; they were simply giving customers more free products and coupons for future purchases, while specifically telling them to be sure to give favorable scores on the email survey they would receive.  Not only did this behavior drive up concession costs, but it also drove up repeat calls once customers realized their issues were never really resolved.

Equally dangerous in managing to metrics is the practice of self-reported metrics, which is the equivalent of the fox guarding the henhouse.  Businesses need methods for ensuring efficiency and effectiveness, but departments or groups can "game the system" to ensure good scores for rankings, ratings, and even compensation.  Below are two examples of the negative impact of self-reported metrics.

  • Quality Monitoring - Team managers in support centers are rated on the quality of their agents.  Some support centers have a dedicated quality team, but many times the team managers are responsible for evaluating and coaching their own agents.  Low scores not only create extra work for managers by forcing them to coach the poor performers but also force them to report the poor quality of their own team.  This can lead to falsely elevated quality scores, which ultimately drive lower CSAT and issue resolution, as well as higher handle times and repeat calls, when undesirable agent behaviors are never corrected.  If agents are given falsely high quality scores, they will continue performing at the same level.  It is important to ensure that quality and customer satisfaction scores are correlated by coaching and managing to appropriate levels.
  • Issue Resolution - Oftentimes IR is measured by agents reporting whether they believe the issue was resolved - not whether the issue was actually resolved from the customer's perspective.  I've seen many instances of agents claiming an issue was resolved when it was out of their scope of support and they were unable to resolve it, arguing that they should not be scored against something out of their control.  Either way, the issue was not resolved, and in the end that will drive other metrics out of balance.

Designing a balanced scorecard is ultimately about identifying a small number of financial and non-financial measures and attaching targets to them.  This should include measures and targets for both inputs (e.g., contact volume, staffing levels) and outputs (e.g., service levels, abandon rates, issue resolution).  A focus on balanced metrics can help a business quickly see where further analysis and improvement are needed.
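To make the idea concrete, here is a minimal sketch of how a composite scorecard might weight several metrics against targets instead of rewarding a single one.  The metric names, targets, and weights are invented for illustration, not taken from any particular support center.

```python
# Hypothetical balanced-scorecard sketch. Metric names, targets, and
# weights below are illustrative assumptions, not real benchmarks.
METRICS = {
    # name: (target, weight, higher_is_better)
    "aht_seconds":   (420, 0.25, False),  # Average Handle Time
    "csat_pct":      (90,  0.25, True),   # Customer Satisfaction
    "quality_pct":   (85,  0.25, True),   # Quality monitoring score
    "issue_res_pct": (80,  0.25, True),   # Issue Resolution
}

def scorecard(actuals):
    """Return a 0-100 composite: each metric earns its weight in
    proportion to attainment against target, capped at 100%."""
    total = 0.0
    for name, (target, weight, higher_is_better) in METRICS.items():
        actual = actuals[name]
        if higher_is_better:
            attainment = min(actual / target, 1.0)
        else:  # lower is better, e.g. handle time
            attainment = min(target / actual, 1.0)
        total += weight * attainment * 100
    return round(total, 1)
```

Because every metric carries weight and over-attainment is capped, an agent who crushes AHT by skipping troubleshooting still scores poorly overall - the capped composite removes the incentive to trade one metric against the rest.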

In order to collect unbiased feedback, care should be taken to ensure that everyone in the population being surveyed has an equal chance of being selected.  For example, surveying only customers whose issues have been resolved eliminates feedback from customers with unresolved issues, missing opportunities for improvement.  Likewise, when monitoring agent quality, sampling should occur at different times of the day.  Agent behaviors can vary throughout the day, so sampling at random times can surface varying opportunities for coaching and feedback.
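The equal-chance selection described above can be sketched in a few lines using simple random sampling.  The record fields and function name are hypothetical, used only to contrast unbiased sampling with the resolved-only filter the paragraph warns against.

```python
# Hypothetical sketch of unbiased contact sampling for surveys or
# quality monitoring. Record fields are invented for illustration.
import random

def sample_contacts(contacts, n, seed=None):
    """Pick n contacts uniformly at random from the ENTIRE population -
    resolved and unresolved alike - so each has an equal chance."""
    rng = random.Random(seed)
    return rng.sample(contacts, min(n, len(contacts)))

# Contrast with a biased approach that filters first:
#   survey_pool = [c for c in contacts if c["resolved"]]
# Customers with unresolved issues can then never be heard from.
```

The same uniform draw applies to quality monitoring: sampling call timestamps at random across the day, rather than reviewing a fixed hour, keeps time-of-day behavior differences in the sample.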

Ultimately, a balanced scorecard measured by unbiased methods is critical to ensuring a true picture of how the business and individual teams are performing. Without both, agents will drift toward undesirable behaviors and the business could be making unfavorable decisions.


Sara Lisch