False reports spread six times faster on Twitter than genuine reports



An analysis of posts on Twitter by three million people between 2006 and 2017 shows that fake news spreads significantly farther and faster than the truth on social media.
Sinan Aral and his colleagues at Massachusetts Institute of Technology (MIT) followed the spread of 126,000 stories on Twitter. A tweet was considered a story if it asserted a claim, meaning that it didn’t have to be linked to any particular story from a news organisation. The claims were then fact-checked by six independent organisations, including Snopes, Politifact and FactCheck.
“What we found was scary,” says Aral. “False news travels farther, faster, deeper and more broadly than the truth in every category of information – many times by an order of magnitude.”
Truthful tweets took six times as long as fake ones to reach 1,500 people on Twitter – in large part because falsehoods in the sample were 70 per cent more likely to be retweeted than the truth, even after accounting for account age, activity level and number of followers. The most viral fake posts were political in nature.

Image Source – Pixabay

Don’t blame it on the bots

Despite the belief that armies of bots are sowing discord and spreading misinformation, it is people, rather than automated accounts, who are most likely to share incorrect information. Aral and his colleagues analysed the diffusion of information both with and without accounts they identified as bots. Although bots did spread fake news, they shared true news at the same rate.
People share disinformation for a variety of reasons, but strong emotional responses – including surprise and disgust – make people more likely to share fake news. “To me, marry those together and you get the dictionary definition of outrage,” says Vian Bakir of Bangor University, who has researched fake news. “Fake news has been optimised to generate that.”
Something else worth bearing in mind is the motivation of people who share certain news items. “Some people share not because they think it’s true, but because it’s something their network would want to hear,” says Bakir.
Journal reference: Science, DOI: 10.1126/science.aao4960

What about Facebook?

Image Source – Pixabay

Despite their focus on Twitter, the MIT researchers say their findings likely apply to other social media as well. It is difficult to know for sure, because Twitter is one of the few platforms that shares the relevant data with the public.
“There needs to be more cooperation between the platform makers and independent researchers, such as those from MIT,” says David Lazer, a professor of political science and computer and information science at Northeastern University who is familiar with, but did not participate in, the MIT Twitter study.
The ability to investigate more platforms is crucial to understanding the scope of social media’s false-news problem. Studies show more people get their news from Facebook than they do from Twitter, but it is difficult to say which site is more vulnerable to manipulation, Lazer says.
On Twitter people are more likely to be exposed to a wider variety of users with different agendas, he says. “On Facebook, you have people who are more likely to know one another sharing information, so it is possible the purpose of sharing is less to deceive than it would be on Twitter,” Lazer adds. Facebook declined to comment for this article.
“Facebook is clearly the 800-pound gorilla in this conversation, but they have been much less transparent than Twitter,” says Matthew Baum, a professor of global communications at Harvard University’s Kennedy School of Government. “Twitter matters, of course, and we can still learn a lot by studying dissemination patterns on that platform. But at the end of the day, you’re going to have to find a way to work with Facebook.”
Baum says he and Kennedy School colleagues are preparing to also study the potential role of platforms beyond social media, including WhatsApp and other direct-messaging tools.

False versus Fake

Baum and Lazer are part of a team that co-authored a separate article in Science this week about the impact of false and misleading information spread online, and potential ways to intervene against it. Unlike the MIT researchers, who avoided saying “fake news” and called the term “irredeemably polarized,” Baum, Lazer and their colleagues embraced it.
There has been much debate over the phrase, “because Donald Trump and others have chosen to weaponize it,” Lazer acknowledges. “We share those concerns, but also realize any term describing this problem could be similarly weaponized.”
Baum adds that, given the inherent ambiguity of the language involved (including terms such as fake news, false news, misinformation and disinformation), they preferred to use the words that so many people have come to associate with the problem.
Whatever the problem is called, solutions remain elusive, especially at a time when fact-checking sites themselves are often accused of bias. “People don’t like to be told that they are wrong, so they tend to find a way to counterargue their points even if they’ve been debunked, and then attribute bias to the fact-checking site that disagreed with them,” Baum says.
Another problem is that fact-checking requires resurfacing false claims in order to debunk them, and people often remember the false information without recalling the context in which they read it. For that reason, Baum adds, “we have to find the best modality for fact-checking, including where and how to present it.”
This article is reproduced from Scientific American. It was first published on Mar. 8, 2018. Find the original story here.


Please share your comments on this topic.


You can find this post on my website www.technetstreet.com
