Why customer experience measurement is biased
My car recently came due for servicing, which ultimately delivered a rather interesting experience. I took the vehicle to the service center on the designated day, but several things went wrong, and I did not get it back as planned. Needless to say, I was anything but happy.
Why am I telling you this? Well, things got intriguing after I finally collected my vehicle. At that point, they informed me I would receive a survey, and could I be so kind as to provide a score of nine or 10, since anything else would be detrimental to the team?
What was I supposed to do? Report accurately on my experience while knowing it would impact someone, provide the score requested, or simply ignore the survey? I went with the last option.
Whether my choice was right or wrong is beside the point. The important thing is this episode made me realize something was terribly wrong with the way customer experience (CX) was measured.
When retail brands started measuring CX through emails or SMS, they thought they would be rewarded with several benefits.
For one, they would constantly monitor the pulse of their CX and react quickly to solve customer problems. Besides, CX conversations would start to happen across the organization and brands would have access to a benchmark.
Customers would also be rewarded as they would be offered a new way to highlight issues or pass compliments. And, to a certain extent, some of those benefits did materialize.
This was the time when some software vendors claimed CX would improve if companies simply launched a CX measurement program – be it Net Promoter Score (NPS) or something else, as long as it used their software – that tracked those metrics across the organization.
However, the reality was way more complicated, and brands that followed this advice inevitably suffered disappointment.
Measuring something does not mean you fix it.
One only needs to take a close look at United States market data and the evolution of the American Customer Satisfaction Index (ACSI) since the mid-1990s to see that, in reality, very little has changed. That is despite the amount of money spent on measuring CX.
So, what is happening?
1. In retail, the bulk of the feedback comes from customers who have bought a product or a service. It stands to reason that if you visit a store and end up buying something, you are pretty happy with your experience.
Let us also keep in mind that 90 percent of the customers entering a store leave empty-handed, so I would argue it is incredibly dangerous to take the feedback of the “happy” 10 percent and treat it as representative of the overall CX you deliver.
2. Another issue we have is oversaturation where feedback requests are concerned. You cannot do anything nowadays without being asked for evaluation or comments.
As a result, the only people who end up providing feedback are either the brand aficionados or customers really unhappy with their experience. Therefore, the results are extremely polarized and fail to pinpoint anything but the most critical issues.
3. The story I shared at the beginning illustrates that front-line teams are not shy to ask customers for high scores. This situation is exacerbated when companies link a bonus to an NPS score.
4. Feedback can also be problematic when customers know theirs will be shared.
A number of studies have demonstrated that such knowledge immediately pushes customers into giving much higher scores. This bias results in an artificially inflated score of your CX.
5. Immediacy has become the norm. Teams are often bombarded with feedback and expected to react on the spot instead of being allowed to step back, reflect and devise a plan to address the root cause of the problem.
As a result, teams grow increasingly disengaged and critical of the tools in use.
6. It is also disheartening that the score has become the goal.
Provided the NPS is high, no one seems to care about the actual CX, and it hardly matters how you get there as long as you do. My car service experience demonstrates the type of behavior this promotes.
7. Last but not least, the human dimension often becomes underestimated, sometimes even completely ignored, when an IT solution is implemented.
Rx for CX?
Past and present transgressions aside, the fact remains that improving CX is more important than ever, and measuring it is a must.
To succeed, brands need to realize that buying some software with all sorts of bells and whistles is not the solution but only a part of it.
To begin with, brands need to put in place not one but several methodologies to capture CX: a voice-of-the-customer (VOC) survey underpinned by solid software is important, but far from enough.
Regularly interviewing both buyers and non-buyers as they exit the store is incredibly powerful, as are several other methodologies.
Brands also need to have clarity on the mechanism that will help their teams leverage the data and transform it into action. This goes beyond calling back an unhappy customer: the aim must be to foster behavior that will exert a positive and memorable impact on the experience.
This is precisely where many initiatives fall short.
The data is available, but not used to drive change in the organization. In this area, technology alone will not be enough to do the job.
Finally, when a retail excellence program is launched, front-line teams need to be drawn into the conversation early on.
At the end of the day, you need their engagement if they are to embrace the program you intend to roll out.
Anything less risks them making a travesty of your plans.
Article by Christophe Caïs, CEO of Customer Experience Group