
Academic paper: Engines of Understanding


EXECUTIVE SUMMARY: Many papers compare the performance of one software application with another. The authors of this White Paper wanted to explore a more realistic scenario – how Rapide’s customer feedback analysis software performed against the most common method of analysis in the real world: employees doing it manually.

Rapide’s software, called Rant&Rave (i), takes unstructured text – in this case, 3 sets of approximately 1000 customer comments each – and reduces it to quantifiable form, “understanding” the linguistics of each comment, “scoring” the sentiment, and deriving relationships between ideas. In addition to these quantitative measures, its output is a SWOT report that extracts exemplar comments identifying Suggestions and Risks.
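As an illustration of the kind of transformation described here – not Rant&Rave’s actual, proprietary implementation – the minimal Python sketch below categorises comments by keyword and scores sentiment with a small lexicon. Every keyword list and weight in it is invented for the example:

```python
import re
from collections import Counter

# Illustrative only: these category keywords and sentiment weights are
# invented for the example; Rant&Rave's real linguistics engine is proprietary.
CATEGORY_KEYWORDS = {
    "Staff":    {"staff", "assistant", "manager", "helpful"},
    "Delivery": {"delivery", "courier", "late", "arrived"},
    "Price":    {"price", "cost", "expensive", "cheap"},
}
SENTIMENT_LEXICON = {"great": 2, "good": 1, "late": -1, "rude": -2, "expensive": -1}

def analyse(comment):
    """Assign categories and a crude sentiment score to one free-text comment."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    categories = {cat for cat, kws in CATEGORY_KEYWORDS.items() if words & kws}
    score = sum(SENTIMENT_LEXICON.get(w, 0) for w in words)
    return categories, score

comments = [
    "Delivery was late and the courier was rude",
    "Great staff, good price",
]
category_totals = Counter()
for c in comments:
    cats, score = analyse(c)
    category_totals.update(cats)
    print(cats, score)
print(category_totals)  # category frequencies across the whole dataset
```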

In this paper, we compare the output of Rant&Rave with the output of 3 human analysts on the same 3000-comment dataset. The data supported the following findings:

Quantitative findings

  • Rant&Rave falls within the range of human critical judgement when assigning categories to comments, intelligently categorising unstructured natural language comments into a finite set of subjects. Analysis showed that human analysts categorised the same datasets into largely the same major categories in largely the same proportions.
  • Rant&Rave used the full set of categories when analysing each comment, “understanding” variation between similar categories and assigning more categories on average. This led to more detailed outputs than those of the human analysts, who tended to “adopt” a smaller set of broad categories.
  • Rant&Rave accentuates both positive and negative sentiment compared to humans, who had a higher tendency to give neutral scores. This has the useful effect of emphasising differences, enabling easier decision-making by managers using its output.
  • Rant&Rave has lower variance than human analysts: the software would give the same outputs if presented with the same data again. By contrast, the variance among our 3 human analysts tended to rise with the emotional content of the data; highly emotional comments led to much higher variation in their scores (see the illustrative sketch after this list).
  • Rant&Rave works on complete datasets many times larger than any human analyst could manage; the largest dataset analysed by any human comprised 250 comments, whereas Rant&Rave’s was 1114.
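A minimal, hypothetical illustration of the variance finding above: given three analysts’ scores per comment and a rough emotionality rating, the per-comment spread can be computed and compared with emotion. The numbers below are invented for the example, not the study’s data:

```python
from statistics import mean, pvariance

# Invented rows: three analysts' sentiment scores for one comment,
# plus a 0-1 "emotionality" rating for that comment.
rows = [
    ( 1,  1,  1, 0.1),   # mild comment: analysts agree
    ( 0,  1,  0, 0.2),
    (-1, -3, -5, 0.9),   # angry comment: scores diverge
    (-2, -4, -1, 0.8),
]

variances = [pvariance(row[:3]) for row in rows]   # per-comment spread
emotion = [row[3] for row in rows]

def pearson(x, y):
    """Plain Pearson correlation, standard library only."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(variances)
print(pearson(emotion, variances))  # positive: spread rises with emotion
```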

Qualitative findings

  • Rant&Rave’s SWOT output is based on quantitative findings rather than the “gut feeling” of human analysts. SWOT outputs from the humans were brief, general, and mostly without supporting data.
  • Rant&Rave’s SWOT output requires further human analysis. Because it extracts a list of “most relevant” comments as Suggestions and Risks without judging the value of each comment, low-value Suggestions can rise to the top simply because there were more of them (see the sketch after this list). Rant&Rave’s outputs need the final application of human judgement and critical thinking.
  • Human analysts displayed “observer bias” that coloured their outputs. Behavioural factors such as over-generalising, comfort zones, and fatigue led them to use fewer categories and less differentiated scores.
  • Human analysts were affected by the emotionality of comments. The variance in their categorising and scoring covaried approximately with the level of emotion in the text; customer complaints and negatives were harder for the humans to score consistently.
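The frequency-ranking behaviour described in the Suggestions and Risks point above could look like the sketch below. This is an assumed mechanism shown for illustration, not Rant&Rave’s published algorithm:

```python
from collections import Counter

# Invented clusters of similar suggestions, with a count of comments in each.
suggestions = Counter({
    "more parking spaces": 42,    # frequent but low business value
    "longer opening hours": 17,
    "open an online store": 3,    # rare but potentially high value
})

# Ranking purely by frequency surfaces the most common suggestion first,
# regardless of its value -- hence the need for final human judgement.
for text, count in suggestions.most_common():
    print(count, text)
```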

Return on Investment

  • The cost per unit time of Rant&Rave was far lower. Human analysts took 100+ hours in total to analyse 750 comments between them; Rant&Rave took a few minutes for the full dataset of 3000+ comments. A business receiving 1000 comments a month would save 0.9 FTE of human resources using Rant&Rave (ii); the worked calculation after this list shows the arithmetic.
  • The opportunity costs of using humans for large-dataset analysis are reduced by Rant&Rave. All human analysts in this project were university graduates with technological and analytical skills better used in creating business value. If the average employee at this level contributes £100,000/yr to turnover (iii), Rant&Rave saves an NPV close to £1m for every 10,000 customer comments received.
  • Rant&Rave enables faster and more accurate management decision-making by applying detailed analysis, avoiding human variability, and providing quantitative support for its output. This equips managers with the information required to reduce economic “x-inefficiency”: the extent to which current activities prevent more profitable ones.
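The FTE arithmetic behind these figures can be made explicit. The 33 hours and 250 comments come from the study’s footnotes; the 150 working hours in a full-time month used to convert to FTE is our assumption:

```python
# Figures from the study: one analyst took 33 hours for 250 comments.
# The 150 working hours in a full-time month is an assumption.
hours_observed = 33
comments_observed = 250
monthly_comments = 1000
fte_hours_per_month = 150                                # assumed full-time month

hours_per_comment = hours_observed / comments_observed   # 0.132
monthly_hours = hours_per_comment * monthly_comments     # 132
fte_saved = monthly_hours / fte_hours_per_month          # ~0.88, i.e. ~0.9 FTE

print(f"{monthly_hours:.0f} hrs/month ~= {fte_saved:.1f} FTE")
```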

Who should read this paper

This report is for marketing, PR, and product managers faced with a stream of unstructured customer feedback – whether from web forums, suggestion boxes, comment postcards, or surveys – who may be interested in a means of analysing that data effectively.

(i) Rant&Rave software site (now Upland Software).

(ii) Based on the speed of categorisation observed in this project: one analyst took 33 hours to analyse 250 comments (a quarter of one ~1000-comment dataset); at that rate, 1000 comments a month would take approximately 132 hours, close to one full-time-equivalent employee.

(iii) Rough figure derived from a company’s turnover divided by its number of employees; many UK companies yield a figure close to £100,000.