ChatGPT Cites Elon Musk’s Grokipedia as Source Multiple Times, Report Finds

The rivalry between Elon Musk and Sam Altman has taken an unexpected turn, moving beyond social media jabs and into the very data sources that power modern AI systems. A new report suggests that OpenAI’s ChatGPT has begun citing Grokipedia, a knowledge platform owned by Musk’s xAI, raising fresh questions about AI reliability, information sourcing, and the future of online reference material.

According to a report by The Guardian, the latest model powering ChatGPT, GPT-5.2, has referenced Grokipedia multiple times when responding to a range of factual queries. The development is striking, given the highly publicised tensions between Musk and Altman and the broader competition between their respective AI ecosystems.


What the Report Found

The Guardian’s analysis found that ChatGPT cited Grokipedia nine times across responses to more than a dozen questions. These queries spanned sensitive and complex topics, including political structures in Iran and discussions around Holocaust denial.

The findings suggest that Grokipedia is being treated by large language models as a legitimate reference source, similar to how Wikipedia has long been used as a foundational knowledge base for both humans and machines.

This trend is not limited to OpenAI. The report also noted that Anthropic’s chatbot, Claude, referenced Grokipedia in responses related to topics such as petroleum production and Scottish ales. That suggests Grokipedia’s reach is expanding across multiple AI platforms, not just those tied to OpenAI.


What Is Grokipedia and Why It Matters

Grokipedia was launched in October 2025 by xAI as a new, AI-generated alternative to Wikipedia. Unlike Wikipedia, which relies on human editors, citations, and community review, Grokipedia is entirely powered by large language models.

The appeal is obvious. AI-generated encyclopaedias can be updated faster, expand their coverage more quickly, and require far fewer human contributors. For AI systems trained to ingest vast amounts of digital text, Grokipedia fits neatly into their information ecosystem.

However, this efficiency comes with risks. Large language models are known to “hallucinate” — generating content that sounds plausible but is factually incorrect. When such systems are used to create reference material, the risk of errors being repeated and amplified increases significantly.


From Wikipedia to AI-Generated Knowledge

For years, Wikipedia has served as one of the most trusted open knowledge sources on the internet. Its strength lies not just in its content, but in its process: human editors debate, verify sources, and correct mistakes over time.

Grokipedia represents a fundamentally different approach. Instead of human moderation, it relies on AI systems that themselves depend on training data drawn from the web. When AI systems then cite AI-generated content, critics warn of a feedback loop where errors reinforce themselves.

The fact that ChatGPT and Claude are already referencing Grokipedia suggests that this shift from human-curated to AI-curated knowledge may be happening faster than expected.


Concerns About Misinformation

The growing visibility of Grokipedia has revived long-standing concerns about misinformation in AI systems. Shortly after Grokipedia’s launch, Wikipedia co-founder Jimmy Wales publicly criticised the idea of using LLM-powered tools for factual reference.

Wales warned that language models are not yet reliable enough to write encyclopaedic entries without significant human oversight. He argued that while AI can assist with drafting or summarising information, it should not replace rigorous editorial processes.

His concerns now appear more relevant as AI chatbots increasingly treat AI-generated sources as authoritative.


An Ironic Twist in the Musk–Altman Feud

There is a certain irony in OpenAI’s ChatGPT citing a Musk-backed platform at a time when Musk is one of OpenAI’s most vocal critics. Musk has repeatedly accused OpenAI of abandoning its original mission and prioritising commercial interests.

Yet, despite that rivalry, the underlying AI systems appear to be converging in their information sources. This highlights a key reality of the AI era: once information enters the digital ecosystem, it becomes difficult to control how and where it is used.

The feud may be personal and corporate, but the data flows are increasingly interconnected.


Why AI Models Are Turning to Grokipedia

One reason Grokipedia may be gaining traction is accessibility. AI-generated platforms can structure information in ways that are easier for other AI systems to parse. They are often written in a uniform style, free of the editorial debates and inconsistencies that exist in human-edited encyclopaedias.

For developers, this makes such platforms appealing as machine-readable sources. Over time, that technical convenience could outweigh concerns about accuracy, unless safeguards are put in place.


What This Means for Users

For everyday users, this development underscores the importance of understanding how AI systems generate answers. When a chatbot cites a source, that citation does not guarantee the source has been independently verified or reviewed by humans.

As AI tools become more deeply embedded in education, journalism, and decision-making, the quality of their sources will matter as much as the sophistication of their models.

Users may need to become more critical consumers of AI-generated information, especially on complex or sensitive topics.


A Turning Point for Online Knowledge

The emergence of Grokipedia as a commonly cited source marks a potential turning point in how knowledge is created and consumed online. If AI-generated encyclopaedias become dominant, the role of human editors and traditional fact-checking could diminish.

Whether this leads to faster access to information or a decline in reliability will depend on how companies like OpenAI, xAI, and Anthropic address transparency and accountability in their systems.


Conclusion

The discovery that ChatGPT is citing Elon Musk’s Grokipedia highlights how quickly the AI information landscape is evolving. What was once an experiment is now influencing mainstream AI responses, even as experts warn about the risks of relying on LLM-generated knowledge.

As AI systems increasingly learn from each other, the boundary between source and output grows thinner. The challenge ahead will be ensuring that speed and scale do not come at the cost of accuracy and trust.
