
Examining Media Coverage of Misinformation

By Delphine Gardiner

Since 2016’s twin shocks of Brexit and Trump’s election as US president, there has been escalating concern about online misinformation. In their new paper, Duncan Watts and David Rothschild, along with co-authors Ceren Budak (U. Michigan), Brendan Nyhan (Dartmouth), and Emily Thorson (Syracuse), build on their own research and cite over 150 journal articles on social media misinformation to show that exposure to misinformation is very low for the typical consumer and concentrated within extreme communities that actively seek out this content. The common assumption that misinformation is widespread and driven by algorithms is largely shaped by how the media portrays it.

Mainstream media tends to place the blame on social media

The media frequently blames social media for the spread of misinformation, as seen in these articles: “How Elon Musk ditched Twitter’s Safeguards and primed X to spread misinformation” and “How algorithms are amplifying misinformation and driving a wedge between people.”

Figure 1: Story from the New York Times in 2022 about how social media is the problem

Both articles attribute this spread to the platforms’ algorithmic recommendation systems. More broadly, the impact of these algorithms is frequently overstated by journalists and commentators who claim that “platform algorithms are the primary cause of exposure to false and extremist content online.”

However, these assumptions overlook findings that user preference, not the algorithm, drives misinformation consumption. Research by Hosseinmardi and Watts shows that consumption of extreme content is primarily driven by user choice and audience demand, and a more recent study found that only “0.4% of algorithmic recommendations directed users to extremist channel videos on YouTube.”

In other words, misinformation consumption is driven mainly by users who actively seek it out. These individuals tend to hold extreme attitudes, such as sexism and racial resentment, and are usually concentrated in small, like-minded fringe communities.

A lack of context

The media can also distort what people take away from statistics about misinformation on social media by presenting numbers without the context needed to interpret them. Journalists may also report a study’s main takeaway while leaving out details essential to understanding the original research, widening the gap between what the research shows and what people take away from it.

Media coverage often highlights fake news articles or videos amassing millions of views, but these numbers pale in comparison to the sheer volume of content in the information ecosystem. This misleads Americans into believing that the majority (65%) of what they see on social media is misinformation, when fake news in fact makes up only around 0.1% of a typical person’s news diet. Such journalistic choices prioritize sensationalism over transparency, inducing panic about misinformation and eroding trust in mainstream media.

Exposure is not synonymous with engagement

Rothschild and Watts are also concerned with understanding exposure to misinformation, yet public discourse tends to equate engagement with exposure when discussing misinformation statistics. There is an important distinction. Engagement refers to what people do publicly, such as sharing or liking articles they encounter on social media. Exposure, by contrast, reflects what they read or view privately; this consumption aligns more closely with their own beliefs and is a more accurate measure of the extent of misinformation on social media.

Because engagement data is more readily available to researchers, people usually associate the prevalence of misinformation with how many likes or shares articles receive. However, a study by Rothschild and Watts showed that “publicly shared URLs on Facebook are more likely to be false news than those not publicly shared,” meaning engagement metrics overstate how much misinformation typical users are actually exposed to. This pattern suggests that people are more likely to share “clickbaity” or sensationalized headlines because they evoke emotional reactions from readers.

Although misinformation is rare, the paper emphasizes that it can still lead to “real-world harm,” such as increases in hate crimes and civil unrest. To mitigate these harms, the authors call for greater collaboration between researchers and social media companies, with user privacy protected, so that researchers can thoroughly study the effects of misinformation on social media (even among extreme sets of users) and develop more robust interventions and stronger platform accountability.

In addition, journalists and other public figures should prioritize transparency when framing their narratives and emphasize the role of user responsibility in social media consumption. Because the media significantly influences public perception, how it talks about misinformation matters.

“Misunderstanding the harms of online misinformation” was published in Nature.