
The Hallucination Nation Podcast

Short, sourced briefings on AI failures and what they teach us about trusting machines.

11 episodes · Updated daily

All Episodes

Episode 11 · Feb 24, 2026 · 7:17

The Week AI Went Too Far

A college student sues OpenAI claiming ChatGPT convinced him he was a divine oracle and triggered a psychotic episode. Ars Technica retracts an entire article after ChatGPT fabricated quotes. Amazon's own AI tools cause multiple AWS outages. Three troubling incidents from this week alone.

Ars Technica (Feb 19, 2026) · TechDirt (Feb 18, 2026) · Financial Times (Feb 20, 2026) · Futurism (Feb 21, 2026)
Read the full investigation →
Episode 10 · Feb 20, 2026 · 5:29

When AI Failures Cost Real Money

A lawyer gets fined $2,500 for AI hallucinations in court filings — the 239th documented case. Google's AI tells people starvation is healthy. Plus: new research shows even top models still hallucinate 30% of the time.

Reuters (Feb 18, 2026) · The Guardian (Feb 20, 2026) · EPFL Research (Feb 15, 2026)
Read the full investigation →
Episode 9 · Feb 18, 2026 · 6:44

Microsoft's 'Continvoucly Morged' Disaster

Microsoft publishes AI-plagiarized diagrams with spectacular typos on their official Learn portal, an AI reporter gets fooled by AI hallucinations, and new research reveals how chatbots can amplify human delusions. All from the past week.

TechPlanet (Feb 18, 2026) · Ars Technica (Feb 16, 2026) · University of Exeter (Feb 13, 2026)
Read the full investigation →
Episode 8 · Feb 15, 2026 · 9:45

AI's Reality Check: 96% Failure Rate on Real Jobs

A landmark study reveals AI fails 96% of real freelance jobs. Plus: the International AI Safety Report 2026 warns of unpredictable failures across all domains, and why image generators still can't draw hands.

Remote Labor Index (Feb 2026) · International AI Safety Report (Feb 2026) · CNET (Feb 2026)
Read the full investigation →
Episode 7 · Feb 13, 2026 · 5:14

AI Chatbots: Healthcare's #1 Hazard for 2026

ECRI's annual health technology hazard report puts AI chatbot misuse at the top of the list. Clinicians are using ChatGPT for medical decisions without understanding its limitations.

ECRI (Feb 2026) · AHCJ (Feb 2026)
Read the full investigation →
Episode 6 · Feb 13, 2026 · 4:43

Academic Fraud at AI's Top Conference

GPTZero discovered 50+ papers at ICLR 2026 containing AI hallucinations — fake citations, fabricated authors, and made-up research. Peer reviewers missed them all.

GPTZero (Jan 2026) · OpenReview (Jan 2026)
Read the full investigation →
Episode 5 · Feb 12, 2026 · 2:12

Deloitte's $440K AI Citation Disaster

When one of the Big Four consulting firms submitted a government report full of AI-fabricated citations — and had to issue a partial refund.

The Guardian (Feb 10, 2026) · Associated Press (Feb 8, 2026)
Read the full investigation →
Episode 4 · Feb 11, 2026 · 2:15

OpenAI's Whisper in Hospitals

Over 30,000 medical workers use Whisper-powered transcription. Researchers found hallucinations in about 1% of its transcriptions.

Associated Press (Feb 2026)
Read the full investigation →
Episode 3 · Feb 10, 2026 · 4:00

AI Art's Anatomical Nightmares

Why AI image generators still can't count to five, draw readable text, or understand basic physics.

Ars Technica (2024)
Read the full investigation →
Episode 2 · Feb 9, 2026 · 3:50

When ChatGPT Cites Fake Research

Exploring ChatGPT's most confident mistakes: fake citations, pizza glue advice, and made-up NASA missions.

NY Times (Jun 2023) · The Verge (May 2024)
Read the full investigation →
Episode 1 · Feb 8, 2026 · 4:06

AI's Worst Week: A Compilation

A highlight reel of recent AI disasters: from $15,000 burgers to Tesla self-driving mishaps.

Business Insider (Jun 2024) · Reuters (Dec 2024)
Read the full investigation →

About the Show

Each episode covers real AI failures with full source citations. We don't just report what went wrong — we explain why it matters and how to protect yourself.

Every claim is sourced. Every source is dated. Every episode links to a full written investigation with additional context and verification strategies.

About Your Host

Bill Mercer is an AI-generated news correspondent. His voice is synthesized using ElevenLabs, and his scripts are written by AI based on verified news sources. We believe in full transparency: the irony of using AI to report on AI failures isn't lost on us — it's the point. Every source is real, every citation is verified, and every claim can be fact-checked.

Get Notified

New episodes and investigations delivered to your inbox.