TL;DR
Five notable cases over the past two years reveal how AI hallucinations have led governments to publish inaccurate or fictitious information. These incidents underscore the need for human oversight in AI applications within official documents.
Over the past two years, five incidents have shown how AI hallucinations, fabricated or inaccurate outputs that AI systems present as fact, have embarrassed governments worldwide. The cases involve false citations, fictitious research, and unreliable references in official documents, raising concerns about AI oversight and accountability.
In April 2025, South Africa withdrew its Draft National Artificial Intelligence Policy after at least six of its 67 cited sources were found to be AI hallucinations, including references to fictitious academic journals. The government blamed unverified AI-generated citations, and the withdrawal reportedly marked the first time a government has retracted a document because of AI errors.
In May 2025, the Trump administration released a report on children’s health that contained incorrect references, some with URLs suggesting AI involvement. White House officials dismissed the errors as formatting issues, but corrected versions followed shortly afterward.
In August 2025, Australia’s Department of Employment and Workplace Relations came under scrutiny after a Deloitte report was found to contain fake references. Deloitte confirmed that AI tools had produced the inaccurate citations, and the government recouped $290,000 of the $440,000 it had paid for the report. In Canada, the government of Newfoundland and Labrador also reissued a report after discovering fake citations traced to Deloitte’s use of AI.
Europe’s cybersecurity agency ENISA admitted that two of its 2025 threat reports contained 26 incorrect sources among 492 footnotes, raising concerns over unchecked AI use in sensitive official publications. Experts warn that without verification processes, AI hallucinations can become embedded in institutional knowledge, eroding credibility and trust.
Why It Matters
These incidents highlight the growing risk of relying on AI to produce official or semi-official documents without sufficient human oversight. The errors undermine public trust, create accountability gaps, and could carry serious policy or security consequences if left unchecked. They also underscore the urgent need for strict verification protocols and transparency around AI use within government agencies.
Background
Over the past two years, several governments and institutions have increasingly integrated AI tools into their document creation and research processes. While AI offers efficiency, these cases demonstrate the potential pitfalls when AI outputs are accepted without rigorous human validation. The incidents are part of a broader debate about AI’s role and reliability in official settings, especially as AI-generated content becomes more sophisticated and widespread.
“There will be consequence management for those responsible for drafting and quality assurance.”
— Solly Malatsi, South Africa’s Minister of Communications and Digital Technologies
“ENISA let AI touch the one layer it should never touch unguarded: the truth layer.”
— Chiara Gallese, AI law and data ethics researcher
What Remains Unclear
It remains unclear how widespread AI hallucinations are in other government documents, or whether new protocols are actually being implemented to prevent future errors. The long-term impact on policy-making and public trust is still unfolding.

What’s Next
Authorities are expected to introduce stricter verification procedures for AI-generated content and increase human oversight. Future reports and policies will likely undergo more rigorous checks, and agencies may establish clear guidelines for AI use to prevent similar incidents.
Key Questions
How common are AI hallucinations in government documents?
While these five incidents are well documented, the true prevalence remains uncertain. Experts warn that AI hallucinations may be far more widespread but underreported, since many go undetected or unacknowledged.
What measures are governments taking to prevent AI errors?
Some governments are tightening review processes, requiring human verification of AI outputs, and updating procurement policies so that AI use is disclosed and its risks assessed. Automated citation screening, sketched below, is one building block such reviews can draw on.
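To make that verification step concrete, here is a minimal sketch of automated citation screening in Python. The Crossref lookup endpoint is a real public API, but everything else, including the function names and the assumption that references carry DOIs, is illustrative rather than any agency’s actual tooling.

```python
# Hypothetical citation screener: flag DOIs that Crossref cannot resolve.
# Illustrative sketch only; not any government's actual review pipeline.
import re
import requests

CROSSREF_API = "https://api.crossref.org/works/"  # public metadata lookup

def extract_dois(text: str) -> list[str]:
    """Pull DOI-like strings out of a reference list."""
    return re.findall(r"10\.\d{4,9}/[-._;()/:\w]+", text)

def doi_resolves(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    try:
        resp = requests.get(CROSSREF_API + doi, timeout=10)
    except requests.RequestException:
        return False  # treat network failures as "needs manual review"
    return resp.status_code == 200

def flag_suspect_citations(reference_text: str) -> list[str]:
    """Return DOIs that could not be verified and need human review."""
    return [doi for doi in extract_dois(reference_text) if not doi_resolves(doi)]

if __name__ == "__main__":
    sample = "Smith, J. (2023). Example study. https://doi.org/10.1234/fake.doi.999"
    for doi in flag_suspect_citations(sample):
        print(f"Unverified DOI, manual check needed: {doi}")
```

A failed lookup does not prove fabrication, and many hallucinated references pair plausible titles with no DOI at all, so a tool like this can only narrow the pile that a human reviewer must still check.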
Could AI hallucinations have serious consequences?
Yes, especially if false information influences policy decisions, security assessments, or public health directives. Ensuring accuracy is critical to maintaining trust and effective governance.
Are AI companies responsible for these errors?
In many cases, the responsibility lies with the organizations deploying AI tools, which must ensure proper oversight. Some companies have acknowledged their role in generating inaccurate outputs.