Real or Fake News? Let Statistics Help: 7 Questions to Ask
May 12, 2017
There is a lot of discussion today about whether the stories we see in the news are real or fake. Statistical thinking can help you assess the validity of reports, claims from a new study, or other conclusions flashing through your social media feed. Here are a few tips from statisticians – experts in the scientific discipline of learning from data – for how to separate fact from fiction, science from salesmanship, precision from propaganda.
Is the source trustworthy?
Data are gathered and reported on a nearly constant basis by organizations ranging from major academic institutions to small independent groups. If the source of the data is unfamiliar to you, take a few minutes to dig into its background. If the source is cited in an academic or government publication, chances are, it’s credible.
Does the source have an interest in a particular result?
Look closely to determine which organizations conducted and funded the effort. If the primary source of the data has a financial or reputational stake in promoting the findings, the way the research was designed, conducted, and reported may have been influenced. While this is worth investigating, it is not necessarily a reason to automatically discount the findings. If a possible financial stake is identified, find out whether the paper or study has been independently reviewed or appears in a credible publication. Check to see if the results have been reproduced or replicated. It also helps to know how the professional community reacted and how other research by the same authors has been received.
How were the data collected?
There are multiple methods for collecting the data we see in news reports, including surveys, interviews, focus groups, experiments, randomized clinical trials, and observational studies. Understanding the methods used to collect the data in a news report can provide insight into the validity of the findings. Consider the following questions: Did the survey sample people at random (versus, say, a survey taken by visitors to a website)? Was the methodology explained and clearly communicated? Be wary of surveys with low response rates, surveys answered only by self-selected volunteers, small sample sizes in any study, and claims of causality.
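To see why self-selected respondents are a red flag, a small simulation helps. The sketch below uses entirely made-up numbers: a hypothetical population in which 30% hold some opinion, and an assumption (for illustration only) that people holding that opinion are three times as likely to volunteer a response. The random sample recovers the true rate; the volunteer sample badly overstates it.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 people; 30% hold opinion X.
population = [1] * 30_000 + [0] * 70_000
random.shuffle(population)

# Random sample: every member is equally likely to be chosen,
# so the estimate lands near the true 30%.
random_sample = random.sample(population, 1_000)
print(f"Random sample estimate: {sum(random_sample) / len(random_sample):.1%}")

# Self-selected sample: assume (purely for illustration) that
# holders of opinion X respond 3% of the time and everyone else
# only 1% of the time. The estimate is wildly inflated.
volunteers = [person for person in population
              if random.random() < (0.03 if person else 0.01)]
print(f"Self-selected estimate: {sum(volunteers) / len(volunteers):.1%}")
```

No amount of extra volunteers fixes the second estimate; the bias comes from who chooses to answer, not from how many do.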
Why does the margin of error matter?
No study or survey can perfectly represent an entire population. So, researchers survey or study smaller samples to gather data intended to reflect the whole. To account for the discrepancy between the sample results and the population, statistical reports specify a margin of error, usually as a percentage. Smaller margins of error generally mean greater confidence that the data represent the whole population; larger margins of error mean the data paint a less complete and accurate picture. Keep in mind that the margin of error refers only to sampling error. There may be other sources of uncertainty and variability.
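For a simple random sample, the margin of error for a reported proportion can be sketched with the standard textbook formula, z·√(p(1−p)/n). The function below is a minimal illustration, assuming a 95% confidence level (z ≈ 1.96) and the worst-case proportion p = 0.5; it shows why polls of about 1,000 people so often report a margin of roughly ±3 points, and why quadrupling the sample only halves the margin.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion from a simple random sample.

    n: sample size
    p: observed proportion (0.5 is the worst case, giving the widest margin)
    z: critical value (1.96 corresponds to 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents carries a margin of about ±3 points:
print(f"n=1000: ±{margin_of_error(1000):.1%}")

# Quadrupling the sample to 4,000 only halves the margin:
print(f"n=4000: ±{margin_of_error(4000):.1%}")
```

Because the margin shrinks with the square root of n, precision gets expensive fast, which is one reason reputable polls settle for samples of a thousand or so rather than tens of thousands.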
What is the context?
Explosive headlines naturally draw clicks, views and shares, as do data reports showing dramatic spikes in this number and drops in that one. News articles or blog posts do not always explain the deeper context behind the data they report, but it is not necessarily difficult to uncover. Looking for other news reports of the same findings will likely yield additional information. Reviewing the complete survey or study results should also give an indication of the degree to which the full context has been reported. Further, find out to which population the study results actually apply.
Are the findings supported?
If a story in the media cites new research that is counter to previous research, it may be worthwhile to examine the results of similar studies or polls to determine what may have caused a different outcome. Did the numbers truly change? Were other dynamics at play that influenced the results? Also, inquire about who else has reviewed the research results. When independent researchers, not affiliated with the work, are quoted about its findings, their voices provide objectivity, and help convey that the findings are legitimate.
Do the charts and graphs make sense?
People are visual creatures, and media thrive on compelling images, including charts, graphs and other data visualizations. Illustrations can often provide helpful perspective on a story, but they can also be misleading or inaccurate. Remember, an accurate visual portrayal of the data is just as important as the validity of the data themselves. If visualizations were not prepared by someone experienced in translating data into visuals, or by someone involved in the study, the images may be inaccurate. This problem is more common than you might think. Quartz and Flowing Data address the prevalence of misleading illustrations.
Additional resources for fact checking are available through Sense about Science USA, Annenberg Public Policy Center’s FactCheck.org, the Google Digital News Initiative’s Factmata, and The Sunlight Foundation. Journalists looking for rules of thumb for covering stories with numbers can find a useful guide from the Royal Statistical Society. And, of course, additional resources for understanding statistics and how the science of learning from data can be found right here at www.thisisstatistics.org including a compelling interview with BuzzFeed’s data editor, Jeremy Singer-Vine.
You may also be interested in these books:
- News and Numbers by Victor Cohn
- Stat-Spotting: A Field Guide to Identifying Dubious Data by Joel Best
- Statistical Rules of Thumb by Gerald van Belle