Monthly Archives: August 2016

Viewing social media data from a qualitative perspective

When it comes to mining social media for information, I’ve observed that it’s quite unusual for folks from the business world to consider alternatives to “big data” approaches.  So I was delighted to see that, in this recent article from the Harvard Business Review, the authors argue that qualitative approaches to social media listening can generate new insights for companies.

This resonates with what I’ve been saying for the last couple of years.  When I’ve had opportunities to speak to, or work with, companies in the travel industry, for example, I have been advocating a “small data” approach to the analysis of user-generated online reviews.  There is no question that big data approaches can be useful in revealing large, general patterns of consumer behavior.  However, like the authors of this HBR article, I have found that close, careful, qualitative text analysis can yield very different types of contextually-relevant insights into consumer experience and sentiment.

Ratings and Reviews…not the same thing.

It’s been years since I’ve watched the film version of The Hitchhiker’s Guide to the Galaxy, and the only scene I remember is the one in which a group of humans ask a supercomputer to provide the answer to the “ultimate question of life, the universe, and everything.”  After much computational activity, the supercomputer responds: “42.”  I think this is a brilliant illustration of the limits of quantification.  We want to believe in the importance of numbers and statistics, but we often fail to consider how those numbers are generated, what they actually represent, and how they might be variably interpreted.  And let’s face it, reducing complex realities to a single number is… well, pretty reductive.

David Streitfeld’s article “Online Reviews? Researchers Give Them a Low Rating” takes the perspective that online reviews aren’t very useful, but his article actually concentrates far more on the related phenomenon of ratings, rather than reviews.  “Rating” refers to a numeric score assigned from a given scale, such as 1-5 or 1-10.  In the context of Yelp or Amazon, this number is used to quantify a subjective evaluation of experience.  “Reviews,” in contrast, are the more qualitative, narrative texts, which are also used to evaluate a subjective experience.  Obviously, ratings are less informative than reviews, because assigning a numeric score to a multifaceted experience such as dining in a restaurant or reading a novel inevitably means reducing a whole LOT of different kinds of information and opinions into a single integer.

Many reviewers are fully aware of the challenges involved in assigning a single numeric score to a multidimensional consumer experience.  As I described in my book, The Discourse of Online Reviews, some reviewers even go so far as to explain the logic and calculations behind their ratings.  Here’s one of my favorite examples, from a Netflix review of the film Shallow Hal:

I give it a bunch of stars for being funny, I take away a bunch of stars for being hypocritical, I give it some more stars for trying to deliver a good message, and then take away a few stars for it continuing to be hypocritical.  In the end it averages out to 3 stars.

Furthermore, sites like Amazon and Yelp often provide a composite star rating for each product or service, which represents the average of ALL of the ratings assigned by multiple individuals.  How meaningful are these composite ratings, especially when it comes to “experience goods,” such as novels or movies, where people’s tastes vary so widely?  On a related note, even though I’ve been studying hotel reviews on Tripadvisor for nearly a decade now, I still can’t tell you how much of a difference there is between a hotel with a rating of 4.6 and a hotel with a rating of 4.7.

So while I wholeheartedly agree with many of the points made in Streitfeld’s article, in order to more accurately reflect his content, the linguist in me wants to change his title to “Online Ratings? Researchers Give Them Unfavorable Reviews.”