Our Science

Learn how our analytics assess the business impact of poor Human Experience of digital tools and services

Professor Jonathan Pitts
Created by Professor Jonathan Pitts at Queen Mary, University of London, and built upon ten years of academic research, Actual Experience's unique algorithms enable us to capture and analyse real-world human perception of digital applications and services.

The Human Experience Score

The Human Experience score is like sending an opinion survey to all employees and customers every few minutes of every day. Only there is no survey.

Studies indicate that the market value of digital transformation in 2021 will exceed $400bn and will continue to grow at nearly 25% a year.

With increasing global reliance on digital services, coupled with a future of hybrid models of work, there is a greater need for business leaders to know precisely how digital tools are impacting their employees' ability to work and how to act on this information in order to improve.

The score is based on a scale of 0 to 100 and provides an accurate estimate of how your users currently feel about your digital service.


Our Human Experience (HX) score is a simple metric, underpinned by significant analysis, that provides business leaders with a proxy for real-world human perception of digital applications. On a scale of 0-100, it is the most effective way we have found to quickly inform business leaders whether their employees are thriving or suffering and in need of support.

The HX score acts both as the call to action for leaders responding to employees with poor experiences, and as the benchmark for improvements from digital investment.


What makes up the HX Score?

The Human Experience score is derived from a combination of end-to-end measurements and perception-informed analysis of those measurements, giving a proxy for real human perception of digital software applications.

Standard Internet protocols are used for these lightweight end-to-end measurements, from which standard technical metrics are derived, such as round-trip time, packet loss, response time, content size and goodput.
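
To make these metrics concrete, the following minimal sketch (Python, standard library only; illustrative, and not Actual Experience's implementation) derives response time, content size and goodput from a single lightweight HTTP fetch:

    import time
    import urllib.request

    def measure(url):
        # Time a full HTTP fetch, from request to last byte delivered.
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        elapsed = time.monotonic() - start
        return {
            "response_time_s": elapsed,              # total fetch time
            "content_size_bytes": len(body),         # payload delivered
            "goodput_bps": 8 * len(body) / elapsed,  # useful bits per second
        }

    print(measure("https://example.com/"))

Round-trip time and packet loss would be derived in a similar fashion from lower-level protocol exchanges.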

A multi-dimensional approach

The score analysis reflects the fact that a user’s perception of an application is inherently non-linear and multi-dimensional. The primary dimensions are:

  • the perception of the speed with which information is delivered and rendered to the user,
  • the perception of any break-up in the flow of that information (caused by loss or late delivery),
  • the perception of the timeliness with which an application delivers a perceptually meaningful piece of information,
  • the perception of the level of detail in the information delivered,
  • and the perception of conversational interactivity that can be sustained by mutual exchange of information.

This multi-dimensional, non-linear analysis can be applied to any digital software application, and the dimensions are calibrated appropriately. For example, some applications are more interactive than others, and that is reflected in calibration of the score analysis for that particular dimension.
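
To illustrate what calibrated, non-linear scoring could look like in practice, here is a short sketch. The dimension names follow the list above, but the weights, application classes and the weighted-geometric-mean combination are invented for illustration and are not Actual Experience's calibration:

    import math

    # Hypothetical per-application calibration weights (each set sums to 1):
    # how strongly each perceptual dimension contributes for that class of app.
    CALIBRATION = {
        "video_conferencing": {"speed": 0.15, "flow": 0.30, "timeliness": 0.15,
                               "detail": 0.10, "interactivity": 0.30},
        "document_editing":   {"speed": 0.30, "flow": 0.10, "timeliness": 0.30,
                               "detail": 0.20, "interactivity": 0.10},
    }

    def hx_score(app, dims):
        # Combine per-dimension scores (each 0-100) into one 0-100 score using
        # a weighted geometric mean: one badly degraded dimension pulls the
        # result down far more than a linear average would.
        weights = CALIBRATION[app]
        log_sum = sum(w * math.log(max(dims[d], 1e-6)) for d, w in weights.items())
        return math.exp(log_sum)

    # A video call with noticeable break-up (low "flow") scores poorly even
    # though every other dimension is healthy.
    print(round(hx_score("video_conferencing",
                         {"speed": 90, "flow": 40, "timeliness": 85,
                          "detail": 95, "interactivity": 70})))

The geometric mean is just one simple way to express the non-linearity: a single severe problem dominates the combined score, loosely mirroring how it dominates a user's experience.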

The differentiating factor

Other tools that set out to measure experience sit within the application performance management (APM) suite and deliver output based on actual user traffic, creating a mass of data that is then handed to APM experts to piece together a picture of performance. These measurements typically describe only the response time of an application. Response time is a straightforward concept for engineers to understand and report on.

However, a one-dimensional metric carries significant disadvantages. In the first instance, response time does not provide a reliable indicator of performance as perceived by users, and fails to indicate whether or not a user is frustrated by an application.

Furthermore, it offers no explanation of which IT infrastructure behaviour, or combination of system behaviours, caused the deteriorated performance, when those behaviours occurred, or which behaviours will cause deterioration in the future.

Ofcom - the experiment proving the theory

Actual Experience can now present a robust, repeatable, scalable methodology to understand consumer perceptions of Internet services and analyse issues across digital supply chains that affect service quality and those perceptions. This approach is applicable at large scale – to tens or hundreds of thousands of consumers; for multiple services of interest to consumers or business customers; for different methods of access including fixed line, mobile data and business-grade services; and for any region of interest both within the UK and globally.

Investigation of Internet Quality of Experience, Ofcom 2015

Over a period of two and a half years, Actual Experience worked with Ofcom to analyse consumer Internet services across the UK. A core part of that work was to evaluate the correlation between consumers’ real perceptions of digital experience and our Human Experience score analysis.

The consumer perception survey posed questions on overall experience (for early morning, daytime, evening, and night), on browsing, video and voice experience, and for standard and superfast/cable broadband packages.

The responses to the survey questions informed the configuration of each deployed Digital User (DU) and how its analysed Human Experience scores were subsequently processed in the correlation analysis. Surveyed users were asked about their application usage and when they mostly used each type of application. Each user's DU was then configured to measure to the targets they use, and only the analysed scores from the relevant periods were fed into the correlation analysis. Respondents whose DU usage averaged under 30 minutes per day were excluded, to ensure a sufficient overall measurement volume.

For all but two subsets, the analyses demonstrated that the correlation is substantive (varying from moderate to strong) and statistically significant at the p<0.001 level. For the remaining two subsets (voice, and superfast/cable), the correlation is of moderate strength and statistically significant at the p<0.02 level. To express this in less formal terms, the level of confidence that these correlation results are not the outcome of chance is 99.9% for all but two subsets, and greater than 98% for all of the analyses.
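
As an illustrative sketch of this kind of correlation analysis (hypothetical file and column names; not the study's actual code), in Python:

    import pandas as pd
    from scipy.stats import pearsonr

    # Hypothetical input: one row per surveyed respondent, with their mean
    # surveyed opinion, mean analysed HX score, and average daily DU usage.
    df = pd.read_csv("respondents.csv")

    # Apply the usage filter described above: exclude respondents whose DU
    # usage averaged under 30 minutes per day.
    kept = df[df["du_minutes_per_day"] >= 30]

    r, p = pearsonr(kept["survey_opinion"], kept["mean_hx_score"])
    print(f"correlation r={r:.2f}, p={p:.4g}")

A p-value below 0.001 corresponds to the 99.9% confidence level reported for most subsets; below 0.02 corresponds to the greater-than-98% level reported for the remaining two.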

Hence, the conclusion of the study is a very clear linkage between the HX score and the mean surveyed opinion across the full range of conditions: low scores indicate that people are indeed frustrated with their digital experience, whereas high scores show that people can get the job done because everything works well. Ofcom's own surveyed opinion data corroborated Actual Experience's analysis in all areas investigated (overall experience, at specific times throughout the day, for a range of digital applications, and across standard and superfast/cable broadband packages), confirming that the HX score is a reliable predictor of real human experience.

Our global patents

We have patents granted in the US, Europe and China.

US patent no. US9477573, European patent no. EP2457164, Chinese patent no. ZL201080033909.7