Summary: Exploring generative AI in the misinformation era: Impacts as a misinformation source and fact-checker on belief in the information

Whether misinformation is labeled as AI-generated or from general internet sources does not significantly influence how credible or believable people perceive it to be.

ARTICLES

8/14/2025 · 2 min read

This study looks at how people perceive and trust information online, especially when it comes to false or misleading claims about health topics like breastfeeding or heart attacks. The researchers wanted to see whether labeling content as coming from AI (like ChatGPT) makes people trust it more or less. They also examined whether trust in AI, or knowledge about AI, influences how people judge the accuracy of the information.

Purpose and Focus:

  • To understand how AI influences perceptions of misinformation, especially regarding its role as a source and as a fact-checker.

  • To examine whether labeling content as AI-generated affects its credibility.

  • To explore individual differences such as AI trust and literacy in shaping perceptions.

Methodology:

  • Multiple experiments involving online participants from Prolific.

  • Participants assessed misinformation related to health topics (e.g., breastfeeding, baby’s sex, heart attacks).

  • Conditions manipulated: presence/absence of AI labels, type of fact-checker (human vs. AI), and source credibility cues.

  • Additional measures included trust in AI, AI literacy, demographic variables, and issue involvement.

Key Findings:

  1. Perception of AI as a Source:

  • Unlike in earlier research, participants did not view AI-generated misinformation as less credible than misinformation from non-AI sources. Belief levels were not significantly lower or higher for misinformation attributed to AI. These results suggest that an AI source label, by itself, does not shape people’s perceptions of misinformation.

  • As of 2024, the distinction between AI-generated content and traditional web content has blurred; widespread familiarity with AI may have made the two kinds of sources feel interchangeable.

  2. Impact of AI Trust:

  • Personal trust in AI significantly influenced belief in AI-based fact-checkers, especially for individuals highly involved with health issues.

  • Participants with high AI trust were more likely to accept fact-checks of misinformation delivered by AI.

  • Human fact-checkers were perceived as more trustworthy than their AI counterparts, especially for health and nutrition issues, consistent with a negative machine heuristic.

  3. Role of AI Literacy:

  • AI literacy did not significantly moderate perceptions or beliefs.

  • Widespread familiarity with AI might have led to overconfidence or overestimation of personal AI knowledge, diminishing perceived differences.

  4. Role of the Content Context:

  • The study focused on health and nutrition topics, which are emotionally and socially salient.

  • This context may make negative heuristics (e.g., distrust of AI in sensitive domains) more prominent, but overall, the influence of source cues was minimal.

  5. Changing Public Perceptions:

  • The findings reflect a shift in how AI is integrated into information ecosystems, viewed increasingly as just another source rather than a distinct or more trustworthy one.

  • This contrasts with earlier studies, which found that AI source cues strongly affected credibility judgments.

Theoretical Implications:

  • The machine heuristic (the assumption that machines are more secure and trustworthy than humans) may be less applicable now: as AI blends into the information environment, it loses its distinctiveness as a source.

  • Public perceptions of AI's credibility have evolved due to familiarity, potentially leading to source blurring.

  • The assumption that AI labels significantly influence perception needs updating in the context of widespread AI exposure.

Practical Implications:

  • Communication Strategies: Simply labeling content as AI-generated may no longer be effective for influencing trust or credibility perceptions.

  • Design of AI Tools: Emphasizing transparency and accuracy is crucial since users might not differentiate sources based on labels alone.

  • Countering Misinformation: Trust in AI-based fact-checkers can be leveraged, especially among highly involved or trusting individuals, but more effective methods may involve addressing overall trust and literacy.

  • Future Campaigns: Given the blurred boundaries, promoting AI literacy and critical thinking remains essential to equip users with tools to evaluate content beyond source labels.

To read the full research paper, click here: https://www.sciencedirect.com/science/article/pii/S073658532500070X?casa_token=j6HFXShK038AAAAA:Iwa77fiBBx1O7yWnSqmhyS_BeHhsoWDOUkRdZSQh731lGMc29roHVN2_KvymVl3oU5D_3P7DEgM

* This is a series on the blog where we summarize research papers on misinformation. Most people outside academia find research papers dense and tedious to read, so we simplify them for easy comprehension.