With all that is available to us, how do we decide which products, news stories and political candidates are worth our time, money and support? In the Internet age, we increasingly rely on the opinions of others as expressed through online ratings.
A recent study at the Hebrew University of Jerusalem shows that crowd opinion, as expressed through online ratings, is biased and easily manipulated.
To quantify how social influence affects online decision-making, researchers in Israel and the United States designed a large-scale experiment on a popular news aggregation website similar to Digg.com and Reddit.
The collaborative website’s users contribute news articles, write comments in response to those articles, and “up-vote” or “down-vote” others’ comments. This produces a general rating for each posted comment equal to the number of up-votes minus the number of down-votes.
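In code, that rating scheme amounts to a simple net-vote count. Here is a minimal sketch, assuming a plain counter per comment; the site’s actual implementation is not described in the study, and the class and field names are illustrative:

```python
# Minimal sketch of the net-vote rating described above. The site's real
# implementation is not described in the study; names here are illustrative.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    up_votes: int = 0
    down_votes: int = 0

    @property
    def rating(self) -> int:
        # General rating = number of up-votes minus number of down-votes.
        return self.up_votes - self.down_votes

comment = Comment("Interesting take!", up_votes=7, down_votes=2)
print(comment.rating)  # prints 5
```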
“As new communication and information processing technologies assume a more dominant role in our decision-making, this research has implications for electoral polling, stock market prediction, product recommendation and many other areas. In order to interpret collective judgment more accurately and make better use of collective intelligence, we need to adapt online rating and review technologies to account for social influence bias,” explains Dr. Lev Muchnik, one of the study’s lead researchers.
Get the ball rolling
Over five months, the researchers randomly assigned more than 100,000 comments submitted to the site to one of three groups: a positively manipulated group (up-treated), a negatively manipulated group (down-treated), and a control group (neither up-treated nor down-treated).
They artificially up-voted (thumbs up) 4,049 comments at the moment of their creation and down-voted (thumbs down) 1,942 others, then tracked how subsequent users rated them against the untouched control comments. The experimental comments were viewed more than 10 million times and rated more than 300,000 times by other users.
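A minimal sketch of that assignment-and-treatment step might look as follows; the helper names are hypothetical and the treatment rates are rough figures inferred from the reported counts, not the authors’ code:

```python
import random

# Hypothetical sketch of the random-assignment step, not the authors' code.
# The rough treatment rates (~4% up-treated, ~2% down-treated) are inferred
# from the reported counts: 4,049 up-treated and 1,942 down-treated comments
# out of just over 100,000 in total.
def assign_group(rng: random.Random) -> str:
    r = rng.random()
    if r < 0.04:
        return "up-treated"
    if r < 0.06:
        return "down-treated"
    return "control"

def treat(comment: dict, group: str) -> None:
    # The manipulation is a single artificial vote at creation time;
    # control comments are left untouched.
    if group == "up-treated":
        comment["up_votes"] += 1
    elif group == "down-treated":
        comment["down_votes"] += 1

rng = random.Random(42)
comment = {"up_votes": 0, "down_votes": 0}
treat(comment, assign_group(rng))
print(comment)
```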
The study found that a single artificial up-vote created a significant bias in the rating behavior of subsequent users: the one random up-vote given at a comment’s creation triggered herding effects that raised the comment’s final rating by 25 percent, on average, relative to the control group.
Positively manipulated comments were also far more likely to accumulate exceptionally high scores: they were 30 percent more likely than control-group comments to reach or exceed a rating of 10.
Curb the enthusiasm
The research showed that while up-treating had a significant effect, down-treating produced final ratings that were not statistically different from those of the control group. The researchers explained that although the artificial down-votes did bias some subsequent users toward negative ratings, they also provoked a countervailing correction: other users up-voted comments whose scores they saw as undeservedly low, neutralizing the net effect of the negative social influence.
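One way to picture this asymmetry is with a toy voting model, which is entirely hypothetical and not the researchers’ model: a visible positive score slightly raises the chance of further up-votes (herding), while a visible negative score attracts corrective up-votes.

```python
import random

# Toy illustration of the herding/correction asymmetry described above.
# All probabilities are invented for illustration, not fit to the study's data.
def simulate_final_rating(initial_vote: int, n_voters: int = 100, seed: int = 0) -> int:
    rng = random.Random(seed)
    rating = initial_vote
    for _ in range(n_voters):
        if rating > 0:
            p_up = 0.55  # herding: a positive score nudges voters toward up-votes
        elif rating < 0:
            p_up = 0.60  # correction: voters offset a score they see as unfairly low
        else:
            p_up = 0.50  # a neutral score carries no social signal
        rating += 1 if rng.random() < p_up else -1
    return rating

# Averaged over many runs, up-treated comments (+1) end above the controls (0),
# while down-treated comments (-1) finish near the controls rather than
# symmetrically lower; the correction washes out the initial down-vote.
for start in (+1, 0, -1):
    mean = sum(simulate_final_rating(start, seed=i) for i in range(2000)) / 2000
    print(f"initial vote {start:+d}: mean final rating {mean:.1f}")
```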
Based on the results, the researchers concluded that social influence significantly biases rating dynamics in systems designed to harness collective intelligence. While negative social influence is compensated for by crowd correction, positive social influence accumulates, creating a tendency toward ratings bubbles.
The study appeared in the journal Science on August 9, 2013. The experiment was conducted by Dr. Lev Muchnik of the Hebrew University of Jerusalem’s School of Business Administration, Prof. Sinan Aral of the MIT Sloan School of Management, and Sean Taylor of the NYU Stern School of Business. It was supported by Aral’s Microsoft Faculty Fellowship and NSF CAREER Award.
Photo: Confused young man using a notebook in his living room by Bigstock