
The Hallucination Nation Podcast
Short, sourced briefings on AI failures and what they teach us about trusting machines.
All Episodes
The Week AI Went Too Far
A college student sues OpenAI claiming ChatGPT convinced him he was a divine oracle and triggered a psychotic episode. Ars Technica retracts an entire article after ChatGPT fabricated quotes. Amazon's own AI tools cause multiple AWS outages. Three troubling incidents from this week alone.
When AI Failures Cost Real Money
A lawyer gets fined $2,500 for AI hallucinations in court filings — the 239th documented case. Google's AI tells people starvation is healthy. Plus: new research shows even top models still hallucinate 30% of the time.
Microsoft's 'Continvoucly Morged' Disaster
Microsoft publishes AI-plagiarized diagrams with spectacular typos on their official Learn portal, an AI reporter gets fooled by AI hallucinations, and new research reveals how chatbots can amplify human delusions. All from the past week.
AI's Reality Check: 96% Failure Rate on Real Jobs
A landmark study reveals AI fails 96% of real freelance jobs. Plus: the International AI Safety Report 2026 warns of unpredictable failures across all domains, and why image generators still can't draw hands.
AI Chatbots: Healthcare's #1 Hazard for 2026
ECRI's annual health technology hazard report puts AI chatbot misuse at the top of the list. Clinicians are using ChatGPT for medical decisions without understanding its limitations.
Academic Fraud at AI's Top Conference
GPTZero discovered 50+ papers at ICLR 2026 containing AI hallucinations — fake citations, fabricated authors, and made-up research. Peer reviewers missed them all.
Deloitte's $440K AI Citation Disaster
One of the Big Four consulting firms submitted a government report full of AI-fabricated citations — and had to issue a partial refund.
OpenAI's Whisper in Hospitals
Over 30,000 medical workers use Whisper-powered transcription. Researchers found it hallucinates in roughly 1% of transcriptions.
AI Art's Anatomical Nightmares
Why AI image generators still can't count to five, draw readable text, or understand basic physics.
When ChatGPT Cites Fake Research
Exploring ChatGPT's most confident mistakes: fake citations, pizza glue advice, and made-up NASA missions.
AI's Worst Week: A Compilation
A highlight reel of recent AI disasters, from $15,000 burgers to Tesla self-driving mishaps.
About the Show
Each episode covers real AI failures with full source citations. We don't just report what went wrong — we explain why it matters and how to protect yourself.
Every claim is sourced. Every source is dated. Every episode links to a full written investigation with additional context and verification strategies.
About Your Host
Bill Mercer is an AI-generated news correspondent. His voice is synthesized using ElevenLabs, and his scripts are written by AI based on verified news sources. We believe in full transparency: the irony of using AI to report on AI failures isn't lost on us — it's the point. Every source is real, every citation is verified, and every claim can be fact-checked.
Get Notified
New episodes and investigations delivered to your inbox.