Every business consumes, produces, stores and uses data. The stats are mind-boggling. According to IBM, the digital universe already contains in excess of 2.7 zettabytes of data, and we’re creating an additional 35 zettabytes every year.
This data offers us huge potential across all aspects of our business. But it needs to be sifted and sorted, analysed and understood. Too much raw data and we’re overwhelmed. But the alternative - reports compiled from averages that don’t paint the whole picture - means we can miss the subtleties of what’s really happening.
This is particularly true when it comes to assessing human experience - the feeling a person has when interacting with the digital world.
The systems performance data that most network and application monitoring tools collate uses averages across set time periods - measurements might be taken once every five minutes or once a day. But these numbers don’t express a whole start-to-finish conversation, or the way the human being involved in that digital process has experienced it.
The thing is, people aren’t like machines. Where you have a machine-to-machine relationship, averaged numbers are helpful because those machines are predictable within certain parameters. With a human-to-machine experience, and particularly a human-to-human one, you have to factor in an understanding of people - their emotions, their behaviours, their psychology.
When you have a person interacting with a system or using a system to connect with another person, it’s not good enough to say, “The process worked.” You can’t look at a graph and understand how a person felt about their experience.
The process may have worked technically - for example, an employee may have been able to create an internal report and log it on the system. But if that employee found it frustrating to use the system because it was glitchy and only worked intermittently then you have an impact. An impact that you can’t quantify with averages.
Why does it matter if this employee is frustrated? Well, they might get distracted from the task because it’s taking too long, reducing their focus. If they start multitasking, their efficiency and productivity are likely to drop.
They might get so annoyed that they decide to start looking for a new job with a company whose systems actually work properly. And we all know the cost of replacing an employee (typically around one third of their annual salary - so £15,000 for someone earning £45,000 a year).
And what if the process they’re completing is a customer-facing one, for example a form used by a customer service operative in a call centre? Then you’ve got reputational damage too.
The customer is frustrated. Brand loyalty is impacted. They might, if the problem persists, withdraw their business, impacting revenue. As we all know, customer acquisition is a lot more difficult - and expensive - than customer retention.
Bringing this back to technical data and averages, what’s important to remember is that people do not operate on a linear model. A decrease of 10% in call quality won’t automatically lead to a 10% drop-off in people using the system. It might cause a 5% drop-off for the first decrease in quality, a 30% drop-off for the second, and by the time you get to the third, everyone’s gone.
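The non-linear response described above can be sketched with invented numbers - the drop-off rates and user count here are purely illustrative, not measured data:

```python
# Hypothetical illustration: user drop-off does not scale linearly with
# degradation in quality. All numbers are invented for the sketch.

def users_remaining(initial_users: int, dropoff_rates: list[float]) -> int:
    """Apply successive drop-off rates after each quality degradation."""
    users = initial_users
    for rate in dropoff_rates:
        users -= int(users * rate)
    return users

# Three equal quality degradations, but escalating human reactions:
# 5% leave after the first, 30% after the second, 100% after the third.
print(users_remaining(1000, [0.05, 0.30, 1.00]))  # prints 0 - everyone's gone
```

The point of the sketch is that the input (quality loss) is linear while the output (people leaving) is anything but.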
So just knowing what your average latency is over a 24-hour period really doesn’t give enough information about whether the people using your systems are actually having a good experience or not.
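One way to see why an averaged latency figure hides poor experiences is to compare the mean with a high percentile. A minimal sketch, using invented latency samples:

```python
# Hypothetical latency samples (ms): mostly fast requests, plus a handful
# of spikes that real users would certainly notice.
samples = [40] * 95 + [2000] * 5  # 95 fast requests, 5 very slow ones

mean_latency = sum(samples) / len(samples)
p95 = sorted(samples)[int(len(samples) * 0.95)]  # simple percentile estimate

print(f"mean: {mean_latency:.0f} ms")  # 138 ms - looks acceptable on a graph
print(f"p95:  {p95} ms")               # 2000 ms - the experience 1 in 20 users had
```

The average suggests everything is fine; the 95th percentile shows that one request in twenty took two full seconds - exactly the kind of frustration an averaged report smooths away.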
Technical data is vital and averages serve a purpose, but to have impact at a strategic level - to direct how we keep our teams motivated and our customers engaged - we need to understand, monitor and improve human experience too.