We Need To Get Better At Covering Studies About Fake News

Coverage of a seminal study falls short if it doesn’t clarify the nature of the study’s sample, and what that means for our capacity to generalize.

By Alexios Mantzarlis | 12 March 2018 8:29 PM IST

For much of fact-checking’s existence as an explicit form of online journalism, relatively few academics were interested in it. No longer.

Over the past few years, a growing number of scholars in the fields of politics, computer science, psychology and communication have turned their attention to the effect and reach of misperceptions and their corrections. This week, some of the most active in this field published a manifesto for future interdisciplinary work in Science.

Reactions to another study that appeared in this week’s issue of Science highlight the role of the media in accurately disseminating the findings of all this work, and how often they fail to do so.

In “The spread of true and false news online,” Soroush Vosoughi, Deb Roy and Sinan Aral, all at the Massachusetts Institute of Technology, studied a huge sample of tweets about fact-checked claims, posted over the course of more than a decade.

Vosoughi et al. find that stories rated “False” spread faster and wider than those rated “True.” This echoes the findings of a smaller 2015 study by Andrew Guess, now at Princeton University, and the analysis that Craig Silverman, now at BuzzFeed News, had conducted while running emergent.info.

Most coverage of the MIT findings reduced them to a simplistic dichotomy of the “truth” versus false news. Here are a few examples:

  • The BBC headlined its article “Fake news ‘travels faster,’ study finds.” It went on to write in no uncertain terms that false stories “reached more people than the truth.”
  • PBS ran a Scientific American story under the headline “False news travels 6 times faster on Twitter than truthful news.”
  • For Reuters, the story was “False news 70 percent more likely to spread on Twitter,” with the body of the article adding that “false news was about 70 percent more likely to be retweeted by people than true news.”

The problem is that the study’s sample explicitly concentrates on what The Atlantic’s Robinson Meyer called “contested news.” A more accurate formulation, if undoubtedly clunky, would have been “fact-checked news found to be true” and “fact-checked news found to be false.”

The researchers rightly shied away from making their own determinations of the veracity of online content, leaning instead on the findings of six fact-checking and debunking websites (some well-known, others less so). That is obviously not the full universe of real and false news, something co-author Sinan Aral confirmed on Twitter.

Stories that no one has fact-checked, presumably because their truthfulness is not in question, were not part of the study’s sample. Even without being able to quantify them precisely, we can safely presume that they constitute the lion’s share of all “real news.”

More research on “fake news” is fundamental, especially as the problem is increasingly attracting regulatory attention. Any action that seeks to dramatically alter the spread of information at a systemic level should be grounded in solid and public evidence.

Yet too often the evidence that is starting to trickle out from academia is being trivialized and distorted. Simplified, then exaggerated.

As I’ve pointed out in the past, I’m not immune from the tendency to simplify in headlines. Fact-checkers have rightly called me out for my own shortcomings.

At the same time, coverage of a seminal study falls short if it doesn’t clarify the nature of the study’s sample, and what that means for our capacity to generalize.

Collectively, we need to get better. We should be drawing many small lessons about misinformation from these new studies. Instead, we are hammering our audiences with an inaccurate generalization — that fakery is rampant and undefeatable.

In the long run, this message will further increase distrust and disaffection in our online information ecosystem and prevent us from taking small, academically sound steps toward a shared solution.

This story was originally published on Poynter and has been republished with permission.