
How Context Helps People Spot Fake News

Published: 2025

Abayomi Baiyere
Associate Professor & Distinguished Research Fellow of Digital Technology

Key Takeaways

  • Curated, high-quality evidence reliably improves veracity judgment for both true and false news claims, raising overall accuracy by 6–7 percentage points across studies.
  • Evidence quality matters: helpful evidence improves judgment, while irrelevant or merely related evidence can degrade it, making people worse at identifying true claims.
  • Priming a critical mindset — prompting people to think about truth and lies before reading — significantly boosted the effectiveness of uncurated evidence, nearly matching the impact of curated evidence.
  • Discursive evidence can change minds even against prior political beliefs: people updated their judgments based on helpful evidence regardless of whether the claim aligned with their partisan leanings.

Fake news is a growing threat, and existing countermeasures — such as warning flags and labels — have produced mixed results. Flags can backfire by triggering psychological reactance, and they don't provide people with the information needed to actually evaluate a claim. This study proposes a different approach: rather than telling people what to think, give them evidence to reason with.

The researchers introduce the concept of discursive evidence — contextual information that supports an individual's own judgment of whether a news claim is true or false, without imposing a normative verdict. They distinguish between curated evidence (an expert-produced summary from a fact-checking site, with the final rating removed) and uncurated evidence (raw search results generated by querying the news claim). Across three online experiments with U.S. participants recruited via Amazon Mechanical Turk, the team tested how each type of evidence affects people's ability to accurately assess both true and false news claims.

A third study took this further by manipulating the strength of the evidence itself — classifying individual evidence items as helpful (directly informative about a claim's veracity), related (topically relevant but not directly helpful), or irrelevant — to identify precisely which qualities make discursive evidence effective.

For platform designers and policymakers, this research offers a concrete, evidence-based alternative to normative flags. Instead of simply labeling a post as 'disputed' or 'false', platforms could provide users with a short excerpt of high-quality, non-judgmental evidence, letting them weigh the information and reach their own conclusion. This approach may be not only more effective than flags in some settings but also less likely to trigger the backlash that normative interventions often provoke.
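To make this concrete, here is a minimal sketch, in Python, of what such an intervention payload might look like. Everything here is an illustrative assumption rather than the study's implementation: the names (EvidenceItem, build_discursive_intervention), the example URLs, and the payload shape are hypothetical, while the quality labels mirror the helpful/related/irrelevant taxonomy from the third experiment.

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    excerpt: str     # short, non-judgmental text shown alongside the claim
    source_url: str  # provenance of the excerpt
    quality: str     # "helpful", "related", or "irrelevant" (the study's taxonomy)

def build_discursive_intervention(claim: str, evidence: list[EvidenceItem]) -> dict:
    """Attach curated evidence to a claim rather than a normative verdict.

    Hypothetical sketch: only 'helpful' items are surfaced, since the study
    found that related or irrelevant evidence can degrade veracity judgment.
    """
    helpful = [item for item in evidence if item.quality == "helpful"]
    return {
        "claim": claim,
        "verdict": None,  # deliberately no 'disputed' or 'false' label
        "evidence": [
            {"excerpt": item.excerpt, "source": item.source_url}
            for item in helpful
        ],
    }

if __name__ == "__main__":
    items = [
        EvidenceItem("Municipal records list the project's budget as $2.1M.",
                     "https://example.org/records", "helpful"),
        EvidenceItem("A general overview of municipal budgeting.",
                     "https://example.org/overview", "related"),
    ]
    print(build_discursive_intervention("The city spent $10M on the project.", items))
```

Two design choices in the sketch carry the paper's lessons: the payload deliberately omits a verdict, and the filter surfaces only helpful items, since curation quality determines whether the evidence helps or harms.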

The findings also carry a warning: not all evidence is equally beneficial. Poorly curated or irrelevant evidence can actually harm veracity judgment by activating heuristics — mental shortcuts — that lead users astray. This points to an ethical responsibility for anyone deploying discursive evidence at scale. Designing effective fake news interventions requires careful attention to evidence curation, not simply surfacing more content.

Finally, the priming result offers a low-cost, complementary intervention: prompting users to adopt a critical mindset before engaging with news can amplify the benefits of even uncurated evidence. This has potential implications for digital literacy programs and social media interface design, suggesting that brief contextual nudges toward reflective thinking could help meaningfully shift how people engage with information online.