Key Takeaways from a New Study in the INFORMS Journal Information Systems Research:
- 80% of surveyed users agree they trust a review platform more if it displays fake reviews rather than silently deleting them.
- 85% of surveyed users believe they should be able to choose whether to view truthful and fraudulent information side by side on a platform.
- Our results suggest platforms should provide transparent policies to deal with fake reviews instead of censoring them without comment.
- Platforms should pay attention to design and provide understandable “review quality scores” to help consumers understand the level of fraudulent activity associated with a seller.
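To make the "review quality score" idea concrete, here is a hypothetical sketch (not the study's method) of one way a platform could summarize fraudulent activity for a seller: report the share of the seller's reviews that are not flagged as fraudulent.

```python
def review_quality_score(total_reviews: int, flagged_fraudulent: int):
    """Hypothetical review quality score: the share of a seller's reviews
    NOT flagged as fraudulent, scaled to 0-100. This metric and its name
    are illustrative assumptions, not the score proposed in the study."""
    if total_reviews == 0:
        return None  # no reviews, no score to report
    return round(100 * (1 - flagged_fraudulent / total_reviews), 1)

# Example: a seller with 200 reviews, 30 of which were flagged as likely fake
print(review_quality_score(200, 30))  # → 85.0
```

A single number like this lets consumers weigh fraudulent activity without reading every flagged review themselves.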
CATONSVILLE, MD, June 18, 2019 – Many people are using COVID-19 quarantine to get projects done at home, which means plenty of online shopping for tools and supplies. But do you buy blind? Research shows 97% of consumers consult product reviews before making a purchase. Fake reviews are a significant threat to online review portals and product search engines, given their potential to damage consumer trust. Yet little is known about what review portals should do with fraudulent reviews once they detect them.
New research in the INFORMS journal Information Systems Research looks at how consumers respond to potentially fraudulent reviews and how review portals can leverage this information to design better fraud management policies.
“We find consumers have more trust in the information provided by review portals that display fraudulent reviews alongside nonfraudulent reviews, as opposed to the common practice of censoring suspected fraudulent reviews,” said Beibei Li of Carnegie Mellon University. “The impact of fraudulent reviews on consumers’ decision-making process increases with the uncertainty in the initial evaluation of product quality.”
The study, “A Tangled Web: Should Online Review Portals Display Fraudulent Reviews?”, conducted by Li alongside Michael Smith, also of Carnegie Mellon University, and Uttara Ananthakrishnan of the University of Washington, finds that consumers do not effectively process the content of fraudulent reviews, whether positive or negative. This result makes the case for incorporating fraudulent reviews, and for doing so in the form of a score that aids consumers’ decision making.
Fraudulent reviews occur when businesses artificially inflate ratings of their own products or artificially lower the ratings of a competitor’s product by generating fake reviews, either directly or through paid third parties.
“The growing interest in online product reviews for legitimate promotion has been accompanied by an increase in fraudulent reviews,” continued Li. “Various media and industry reports estimate that about 15%-30% of all online reviews are fraudulent.”
Platforms don’t have a common way to handle fraudulent reviews. Some delete them outright (Google), some publicly acknowledge censoring fake reviews (Amazon), while other portals, such as Yelp, go a step further by keeping fraudulent reviews visible to the public with a notation that they are potentially fraudulent.
This study used large-scale data from Yelp to conduct experiments measuring trust, and found that 80% of surveyed users agree they trust a review platform more if it displays fraudulent review information, because businesses are less likely to write fake reviews on such platforms.
Meanwhile, 85% of surveyed users believe they should have a choice in viewing truthful and fraudulent information, and that platforms should leave it to consumers to decide whether to use fraudulent review information in judging the quality of a business.
The study also finds that consumers tend to trust the information provided by a platform more when it distinguishes fraudulent reviews from nonfraudulent ones and displays both, as compared with the more common practice of censoring suspected fraudulent reviews.
“Our results highlight the importance of transparency over censorship and may have implications for public policy. Just as there are strong incentives to fraudulently manipulate consumer beliefs pertaining to commerce, there are also strong incentives to fraudulently manipulate individual beliefs pertaining to public policy decisions,” concluded Li.
When information about fraudulent activity is made available to all consumers, platforms effectively embed a built-in penalty for businesses caught writing fake reviews. A platform that flags fraud admits there is fraud on its site, but that admission is offset by an increase in trust from consumers who already suspected some reviews were fraudulent and can now see that something is being done about it.
About INFORMS and Information Systems Research
Information Systems Research is a premier peer-reviewed scholarly journal that covers the latest theory, research, and intellectual developments for information systems in institutions, organizations, the economy, and society to advance knowledge about the effective and efficient utilization of information technology. It is published by INFORMS, the leading international association for operations research and analytics professionals. More information is available at http://www.informs.org or @informs.