Human beings have always loved reviews: word of mouth has long been regarded as one of the most valuable marketing tools available to a company. But the internet has made us all review-mad. We love giving reviews, and we have grown to depend on them, too.
Researchers have found that more than 99 percent of consumers read reviews online before they shop, and nearly three quarters of consumers say they've left an online review for a local business in the past year. Not only do we read a lot of reviews before we purchase anything these days, we also believe them: 49 percent of consumers say they trust reviews as much as they do personal recommendations from friends and family members. And younger people are particularly fond of reviews: 91 percent of 18 to 34-year-olds say they trust online reviews every bit as much as they do personal recommendations.
This isn't really surprising: the internet has expanded the range of products on offer to a dizzying degree, and we need information about that mountain of goods and services before we buy. Reviews are an obvious solution. The problem is that a lot of those reviews are fake. And, worse, we humans are not very good at telling which ones are real and which are bogus.
Shabnam Azimi and Alexander Krasnikov of Loyola University Chicago and Kwong Chan of Northeastern University recently published a study about how fake reviews manage to fool us.
Azimi says they started from a point already well established by previous research: we humans are generally not good at detecting deception. And that's when we're face to face with each other. Telling when we're being conned is even harder when all we have is text.
The study used a dataset of 1,600 reviews of Chicago hotels. Some were real; others were fake. The reviews were presented to 400 subjects, each of whom got eight to read: a balanced set of two positive fake, two positive real, two negative fake and two negative real, presented in random order. The fake reviews were written by real people who were given information about the hotel and told to work it into the review.
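For readers who like to tinker, a minimal Python sketch of that balanced, shuffled assignment might look like the following. The pool structure and names here are illustrative assumptions, not details taken from the study.

```python
import random

# The study crossed two factors: valence (positive/negative) and
# veracity (real/fake), giving four kinds of review.
CONDITIONS = [
    ("positive", "real"),
    ("positive", "fake"),
    ("negative", "real"),
    ("negative", "fake"),
]

def build_review_set(pools, per_condition=2):
    """Draw a balanced set (two reviews per condition) and shuffle it."""
    chosen = []
    for condition in CONDITIONS:
        chosen.extend(random.sample(pools[condition], per_condition))
    random.shuffle(chosen)  # presented in random order, as in the study
    return chosen

# Tiny stand-in pools; the real pools would come from the 1,600 reviews.
pools = {c: [f"{c[0]}/{c[1]} review #{i}" for i in range(10)]
         for c in CONDITIONS}
print(build_review_set(pools))
```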
"This is what happens in real life," Azimi says. "The writers of those fake reviews, sometimes they're hired by companies. They get paid a high amount of money and they have to sound pretty real." And not just to write a glowing review of a product or a hotel. "You can always hire somebody to write a negative review about your competition," she points out.
The results were illuminating, showing that consumers generally trust negative reviews more than positive ones. Moreover, we humans appear to have a bias towards assuming that positive reviews might be fake, Azimi says.
"Overall, negative reviews are less common than positive ones. So, that's why we pay more attention to them. So it's a totally different processing of positive versus negative information. But when a negative (review) is fake, we get fooled by it."
When asked why they thought a bad review was trustworthy, participants revealed that they assumed that a negative diatribe about a company would be more truthful than a glowing endorsement. They didn't fully account for the possibility that the reviewer might want to harm the business.
Many of us believe that we can spot liars. There is a world of literature and research devoted to the "signals of deception": gaze avoidance, fidgeting, restless foot and leg movements and so on. There's Paul Ekman's hypothesis that facial micro-expressions betray lies. And numerous professional courses claim they can train a person to spot a liar.
But there is no scientific support for any of these hypotheses or approaches. In fact, the science suggests that our ability to distinguish lying witnesses from truthful ones is mediocre at best. And that's when we're face to face with someone. So what chance do we stand when we're reading something online and can't see a person's mannerisms or expressions?
Actually, without those distractions we might do better at identifying fakes. There is a theory that it's easier to determine whether someone is telling the truth by reading an account of what they said than by watching them say it.
Azimi, Chan and Krasnikov's study suggests that we're no better with text than we are in person, although a liar's tools may differ in writing and in speech.
When it came to faking a review, length was important to believability, as was detail. A long, negative review of a hotel, packed with information, tended to convince participants. A lengthy, positive review, on the other hand, was regarded as suspicious; participants tended to trust writers who kept their glowing reviews short.
Emotion was also important in convincing readers, or rather the lack of it. Azimi says study participants tended not to trust reviews whose writers expressed their feelings in a big way. The more dispassionate the negative write-up, the more likely it was to take the reader in.
Other keys to a convincing review were the fluency of the writing and the readability of the text. With a positive review, the more the text read like an ad, Azimi says, the less likely the participant was to believe it. Typos and grammatical errors, meanwhile, tended not to sway people either way.
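To make those cues concrete, here is a rough sketch of how a review's length, emotionality and readability might be quantified. The word list and scoring choices are assumptions for illustration, not the study's actual measures, though the readability score is the standard Flesch reading-ease formula.

```python
import re

# A toy lexicon of emotionally charged words; a real system would use a
# sentiment lexicon such as VADER or LIWC (this list is purely illustrative).
EMOTION_WORDS = {"terrible", "horrible", "awful", "amazing", "wonderful",
                 "disgusting", "fantastic", "hate", "love", "worst", "best"}

def count_syllables(word):
    # Crude heuristic: count groups of vowels (good enough for a sketch).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def review_cues(text):
    """Quantify three cues the study points to: length, emotion, readability."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return {
        "length": len(words),
        "emotion_rate": sum(w.lower() in EMOTION_WORDS for w in words) / n_words,
        # Flesch reading ease: higher scores mean easier-to-read text.
        "readability": 206.835 - 1.015 * (n_words / sentences)
                               - 84.6 * (n_syllables / n_words),
    }

print(review_cues("The room was terrible and the staff were even worse."))
```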
Finally, the study authors wanted to see whether a certain type of person was more susceptible to fake reviews, or better at detecting them. So they assessed participants on the Big Five personality traits: extroversion, agreeableness, openness, conscientiousness and neuroticism. It turns out that people high in openness, who tend to be adventurous and intellectually curious, are better at spotting fake reviews than other personality types. Extroverted people, on the other hand, tend to have a harder time identifying a fake review.
The fake reviews written for Azimi's study were put together by humans, but increasingly, fake reviews are being written by machines. In the past, these bogus endorsements or critiques have been relatively easy to spot, but large language models like the one behind ChatGPT are now being used to generate realistic reviews that can swamp a business's website.
Many companies that host reviews, like Amazon, Tripadvisor or Expedia, use algorithms to weed out fake reviews. But Azimi points out that the machines are programmed by humans, and given our inability to sort the wheat from the chaff when it comes to fake reviews, this doesn't bode well.
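The details of those platform systems are proprietary, but the general approach of training a text classifier on labeled examples can be sketched in a few lines. The training data below is made up, and real systems also weigh behavioral signals such as reviewer history, not just text.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training data; a real system would train on thousands of labeled
# reviews (a corpus like the study's 1,600 Chicago hotel reviews).
texts = [
    "Absolutely perfect stay, best hotel ever, amazing in every way!",
    "Check-in was slow, but the room was clean and the bed comfortable.",
    "Worst experience of my life, total scam, avoid at all costs!!!",
    "Good location near the river; breakfast was average but cheap.",
]
labels = ["fake", "real", "fake", "real"]

# Word unigrams and bigrams pick up phrasing and style: the purely textual
# cues (length, tone, fluency) available when there is no body language.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["Amazing amazing amazing, five stars, truly the best!"]))
```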
The conclusion? When it comes to reviews, it's wiser to be skeptical. We can't be sure whether a machine wrote that review, or, if it was a human, whether they're telling the truth. We can't trust them. Unfortunately, it seems, we can't trust ourselves, either.