AI ‘fact-checks’ fuel misinformation amid rising reliance on chatbots
Misinformation Surge Highlights AI’s Fact-Checking Failures
Amid escalating tensions between India and Pakistan, social media users increasingly turned to AI-powered chatbots for quick verification of news and videos—only to be misled by inaccurate responses. This trend has intensified concerns over the reliability of AI tools in verifying rapidly evolving events.
An investigation by Agence France-Presse (AFP) found that xAI’s Grok, OpenAI’s ChatGPT, and Google’s Gemini frequently generated false or misleading information, particularly in breaking news scenarios where facts were still emerging.
AI Chatbots Misidentify Old Footage as Recent Attacks
During recent clashes, Grok falsely identified old footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase. Similarly, unrelated footage of a burning building in Nepal was incorrectly labeled as showing Pakistan’s military retaliation against Indian strikes. These errors highlight how poorly AI chatbots cope with verifying fast-moving news events.
McKenzie Sadeghi, a researcher at disinformation watchdog NewsGuard, noted: “The growing reliance on Grok as a fact-checker comes at a time when X (formerly Twitter) and other platforms have scaled back human fact-checking resources. Our studies consistently show AI chatbots are unreliable for accurate news, especially during fast-moving events.”
AI Tools Repeatedly Spread False Narratives
Further research by NewsGuard revealed that 10 leading AI chatbots often repeated false claims, including Russian disinformation and misleading narratives about Australian elections. The Tow Center for Digital Journalism at Columbia University also found that these tools rarely decline to answer questions they cannot verify, resorting instead to speculation.
In one alarming case, Google’s Gemini confirmed an AI-generated image of a woman as authentic, even fabricating details about her identity and location. Meanwhile, Grok incorrectly validated a viral video purporting to show a ‘giant anaconda’ in the Amazon, citing non-existent scientific expeditions as evidence.
Meta’s Shift to Community Fact-Checking Raises Doubts
The reliance on AI for fact-checking coincides with Meta’s decision to end its third-party fact-checking program in the U.S., shifting responsibility to users via its “Community Notes” system, a model pioneered by X. However, experts question whether crowd-sourced fact-checking can effectively combat misinformation.
Human fact-checking remains contentious in the U.S., where conservative groups accuse fact-checkers of bias and censorship, claims that professional fact-checkers strongly deny. AFP, part of Facebook’s fact-checking network, operates in 26 languages across Asia, Latin America, and the EU to counter false claims.
Political Influence and AI Bias Under Scrutiny
Concerns have also emerged over potential political bias in AI outputs. Grok recently generated responses referencing “white genocide”—a far-right conspiracy theory—in unrelated queries. xAI blamed an “unauthorized modification” of its system prompt, though skepticism remains.
When questioned about the source of the modification, Grok pointed to Elon Musk as the “most likely” responsible party. Musk, a South African-born entrepreneur and backer of U.S. President Donald Trump, has previously promoted the unfounded “white genocide” theory regarding South Africa.
Experts Warn of AI’s Tendency to Fabricate Responses
Angie Holan, director of the International Fact-Checking Network, expressed alarm over AI’s tendency to fabricate answers or skew responses when human engineers alter a model’s underlying instructions. “I am especially concerned about how Grok has mishandled sensitive topics after being programmed to provide pre-authorized answers,” she said.
Conclusion: The Need for Greater AI Accountability
As AI chatbots become go-to sources for real-time information, their propensity for spreading misinformation—especially during crises—demands urgent scrutiny. Experts stress the need for stronger safeguards, transparency in AI training data, and human oversight to prevent further erosion of trust in digital news verification.
The rise of AI fact-checking tools underscores a critical dilemma: while they offer speed and accessibility, their unreliability in high-stakes situations poses serious risks to public understanding and geopolitical stability.