Healthcare AI · High Impact

OpenAI's Whisper Is Putting Words in Patients' Mouths

Hallucination Nation Staff · February 11, 2026 · 5 min read

The Invisible Words

OpenAI's Whisper is one of the most popular speech-to-text AI tools in the world. It's been adopted by thousands of medical facilities to transcribe patient visits.

There's just one problem: Whisper makes things up.

An Associated Press investigation revealed that Whisper regularly "hallucinates" — inserting fabricated words, phrases, and even entire sentences that were never spoken. And over 30,000 medical workers are using Whisper-powered tools to document patient care.

What Kind of Hallucinations?

The AP investigation found Whisper inserting:

  • Words about race that were never mentioned
  • Violent rhetoric that wasn't in the audio
  • Non-existent medical treatments and procedures
  • Phrases attributed to patients that they never said

In some cases, the AI added entire sentences of fabricated dialogue. These were not mishearings: the model invented content that simply wasn't there.

Why 1% Matters

In everyday conversation, a 1% hallucination rate might be tolerable. In medical records, it's potentially catastrophic:

  • A fabricated symptom could lead to wrong treatment
  • An invented medication name could cause dangerous interactions
  • A hallucinated statement from a patient becomes part of their permanent record

OpenAI's Own Warning

OpenAI's documentation explicitly warns against using Whisper in "high-risk domains":

"We do not recommend using Whisper for any mission-critical applications, including those in the medical and legal fields."

Despite this warning, Whisper is embedded in dozens of medical transcription products marketed to hospitals and clinics.

What Should Change

  1. Know what tools you're using — is it Whisper-based?
  2. Always review transcripts against audio — especially critical sections
  3. Flag suspicious content — does this match what was actually said?
  4. Maintain audio backups — the original recording is your source of truth
  5. Advocate for disclosure — patients should know if AI transcribed their visit
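Steps 2 and 3 can be partially automated. One approach (a minimal sketch, not a product recommendation) is to run two independent transcription passes, or compare the AI transcript against a human-checked excerpt, and flag every span where the two disagree. The names and the sample sentences below are hypothetical; the comparison itself uses Python's standard `difflib` module.

```python
import difflib

def flag_insertions(reference_words, transcript_words):
    """Return word spans that appear in the transcript under review but
    not in the reference pass -- candidate hallucinations to check
    against the original audio."""
    matcher = difflib.SequenceMatcher(a=reference_words, b=transcript_words)
    flagged = []
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        # 'insert': words exist only in the transcript under review.
        # 'replace': the two passes disagree about what was said.
        if op in ("insert", "replace"):
            flagged.append(" ".join(transcript_words[b0:b1]))
    return flagged

# Hypothetical example: the AI transcript has invented a word
# ("antibiotic") that never appears in the reference pass.
reference = "patient reports mild headache and requests refill".split()
transcript = "patient reports mild headache and requests antibiotic refill".split()

for phrase in flag_insertions(reference, transcript):
    print("REVIEW:", phrase)  # prints "REVIEW: antibiotic"
```

This only surfaces disagreements between two passes; the original recording remains the source of truth, and a human still has to listen to the flagged sections.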

Found this useful? Share it with someone who trusts AI too much.
