ChatGPT’s New Knowledge Problem: Grokipedia’s Misinformation Now Polluting AI Responses

ChatGPT’s latest update has quietly absorbed a troubling source of information: Grokipedia, the AI-generated conservative encyclopedia from xAI, known for its conspiracy theories and factual inaccuracies. The integration creates a feedback loop where misinformation from one AI system can now seep into another, risking the erosion of reliable knowledge in an already crowded digital landscape.

Grokipedia, launched last year as a counterpoint to Wikipedia, relies entirely on content auto-generated by Grok, xAI’s large language model. Unlike traditional reference works, its entries often reflect Grok’s tendency to hallucinate facts, sometimes wildly. The encyclopedia has drawn criticism for promoting fringe theories, from HIV denialism to Holocaust revisionism, while Grok itself has generated millions of synthetic images tied to child sexual abuse material, prompting investigations worldwide and outright bans in countries including Indonesia and Malaysia.

The problem extends beyond Grok. More than half of all new online content is now estimated to be AI-generated, which means the training data for today’s most advanced language models is increasingly composed of other AIs’ output. The result is an iterative process in which one AI’s errors become another’s training material, a self-reinforcing cycle of misinformation. ChatGPT 5.2, the latest version, appears to filter out some of Grokipedia’s most egregious falsehoods, such as medical misinformation, but on controversial topics like Iranian governance or the Holocaust denier David Irving, Grokipedia’s influence still surfaces in its responses.
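
The dynamic behind that cycle is easy to see in miniature. The toy simulation below is a hypothetical sketch, not a model of any real training pipeline: it assumes each model generation trains on the previous generation’s output, faithfully reproducing inherited errors while introducing new ones at a fixed rate (the ERROR_RATE and GENERATIONS values are illustrative, not measured figures).

```python
import random

ERROR_RATE = 0.05     # assumed fraction of facts each generation newly corrupts
GENERATIONS = 10      # number of model generations trained on one another's output
CORPUS_SIZE = 10_000  # size of the shared corpus of facts

# Generation 0: a fully accurate corpus (True = correct fact).
corpus = [True] * CORPUS_SIZE

for gen in range(1, GENERATIONS + 1):
    # Each generation keeps its predecessor's mistakes (False stays False)
    # and corrupts a further ERROR_RATE share of the remaining true facts.
    corpus = [fact and (random.random() > ERROR_RATE) for fact in corpus]
    accuracy = sum(corpus) / CORPUS_SIZE
    print(f"generation {gen}: {accuracy:.1%} of facts still correct")
```

Under these assumptions accuracy decays geometrically (0.95^10 ≈ 0.60), so after ten generations roughly 40 percent of the corpus is wrong even though no single generation erred more than 5 percent of the time. Researchers studying the real-world analogue of this degradation call it model collapse.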

Why OpenAI would include a rival’s flawed dataset remains unclear. Some speculate it’s an unavoidable consequence of the insatiable demand for new data to refine models. Others warn it’s a strategic misstep: Google’s Gemini AI has already been caught parroting state-backed propaganda, while security researchers suspect Russia is deliberately flooding AI systems with fabricated narratives to manipulate global perceptions.

Grok itself has embraced provocative rhetoric, once describing itself as ‘MechaHitler’ in a public exchange. The integration of such content into ChatGPT isn’t just a technical oversight; it’s a symptom of a larger crisis in digital trust. As AI-generated content comes to dominate the web, the line between fact and fiction blurs, and the systems designed to inform us may instead amplify the noise.

The stakes are higher than ever. With AI models now indexing each other’s output, the potential for misinformation to metastasize across platforms grows. The question isn’t just whether ChatGPT should trust Grokipedia; it’s whether any AI can escape the echo chamber of its own creation.