You had it 20 years ago: doctors spoke into recorders, transcriptionists turned that into notes, the docs reviewed them.
The first study I cited replaces the "spoke into recorders" stage with non-AI voice recognition.
The second study replaces the "spoke into recorders" stage with LLM voice recognition and, crucially, also replaces the educated transcriptionist step with nothing.
I suspect the real problem is that whether the voice recognition is classic or LLM matters much less than having two humans in the loop instead of one. But that's not a story that gets you to replace cheap voicerec with expensive AI.
A pretty insightful viewpoint I heard recently from a doctor friend: doctors and hospitals believe that only a corporation could possibly implement this, so they fall into the SaaS trap and lose data sovereignty.
Under the hood, a lot of the companies are Llama or Gemma wrappers connected to Whisper.
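If you want to see how thin that wrapper can be, here's a minimal sketch of the architecture, assuming openai-whisper for speech-to-text and Gemma via Hugging Face transformers for the note drafting. The model choices, file name, and prompt are all illustrative, not any vendor's actual pipeline:

    # Sketch of the "ASR + LLM wrapper" pipeline; not any vendor's real code.
    import whisper
    from transformers import pipeline

    # Stage 1: speech-to-text (the part that replaced "spoke into recorders").
    asr = whisper.load_model("base")
    transcript = asr.transcribe("visit_audio.wav")["text"]

    # Stage 2: an instruction-tuned LLM drafts the clinical note.
    # Note what's absent: the transcriptionist review stage.
    llm = pipeline("text-generation", model="google/gemma-2-2b-it")
    prompt = ("Summarize this doctor-patient conversation as a clinical note. "
              "Do not add details that are not in the transcript.\n\n" + transcript)
    note = llm(prompt, max_new_tokens=512, return_full_text=False)
    print(note[0]["generated_text"])

The plumbing is trivial; everything that made the old pipeline safe was the human review step this version drops.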
=> the error rate was 7.4% in the version generated by speech recognition software, 0.4% after transcriptionist review, and 0.3% in the final version signed by physicians. Among the errors at each stage, 15.8%, 26.9%, and 25.9% involved clinical information, and 5.7%, 8.9%, and 6.4% were clinically significant, respectively.
=> Omissions dominated error counts (83.8%, p<<0.001), with CAISs varying widely in error frequency and severity, and a median of 1–6 omissions per consultation (depending on CAIS). Although less frequent, hallucinations and factual inaccuracies were more often clinically serious. No tested CAIS produced error-free summaries.
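Doing the arithmetic on the first study (assuming the percentages compose straightforwardly): clinically significant errors in the final signed version work out to about 0.3% × 6.4% ≈ 0.02%, i.e. roughly 1 in 5,000 in whatever unit the base error rate is measured. That's the baseline the second study's median of 1–6 omissions per consultation is competing with.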
On the gripping hand, people who work in the management end of the US healthcare industry can't be trusted with healthcare or information security to begin with.
My dad likes to joke around and his doctor uses some kind of transcription service. Time for fun!
His doctor asked him about using drugs, and he made a joke that was something like "I only use coke" - meaning Coca-Cola. Of course his doctor knew he was kidding about drinking too much soda because he eats/drinks too much sugar. So they had a little laugh and moved on.
BUT now it's in his medical transcripts. My mom said it "transcribed" it as something like "the patient responded he has used cocaine recently".
I guess his doctor doesn't go in and actually fix things or even read over what the transcription says...
Also both of my parents have accents and have reported really weird transcriptions that don't match what they actually said.
So now my mom has told my dad he can't make jokes with the doctor anymore because even if the doctor knows he's joking it's going to get noted down as a "fact".
This feels like a compelling reason to joke around more.
If inaccuracies make it into your patient record, that's defamatory. Your doctor must sign off on the transcript, and if they're letting poor results through, make it their problem to fix. That'll either force the tech to get better or force a fallback to better note-taking practices.
And that's what makes it actionable defamation. If your doctor signs off on an AI summary that accuses you of being a drug-dependent sex worker, that's serious malpractice.
Might be immature, but personally, once I knew this was possible, I'd go for the high score. Try to get every substance I can think of listed, plus a supposed admission of murder and whatever other ridiculous stuff I can come up with.
"Well you know me doc, I keep my drugs in the deep freezer with the bodies waiting for disposal so I'm quite confident in their shelf life." I wonder what an AI scribe would make of such a remark.
I've ended up with an erroneous medicine allergy on my record because I mentioned a well-known side effect of that medicine during an office visit a couple years ago. Some "moving part" in the system (be it a human entering the doctor's notes, a transcriptionist, etc.) interpreted what I said as an allergic reaction, and now I get asked about that "allergy".
I've asked to have it fixed but other facilities have gotten "copies of my records" and I've had it crop up in visits to other providers.
Thankfully it's not a medicine that's likely to ever be administered to me (or not administered when I'm incapacitated and can't point out the error) so I'm not worried, practically. On principle, though, it really frustrates me. It seems like it will never be fixed.
Imagine if his health insurance premiums got raised because of it, if he lost a job opportunity due to background checks, or if he got arrested because of it. Even going through customs or getting a visa can be tricky with a history of cocaine on your record.
For now. With all that is happening in the US, I wouldn't be surprised if medical records became open to law enforcement and immigration.
I'm here in Europe on a private health plan; my blood results go straight to my insurance company. Wouldn't be surprised if my premiums got adjusted when my cholesterol goes up.
Since the late 90s, the US has been continually moving in the opposite direction from what you're suggesting. You're hearing about it because people have been demanding changes to the way it used to be.
I wonder how it changes the calculus when medical data is leaked into the public domain then hoovered-up by data brokers.
Is a law being broken by a data broker if a credible case can be made that the data was publicly available?
I would think the leaking party would be subject to action, but does the "taint" of the data being private somehow get "washed away" if it becomes publicly available? Asked another way: is a party who consumes illegally leaked but publicly available data also on the hook under privacy regulations?
It's only illegal until someone in power decides it isn't. Anyone watching the US over the past year should know that by now. (And anyone who has lived under a repressive regime or a country that has slid into autocracy or fascism already knows this well.)
I have plenty of chemical dependency medical records, it has had zero impact on me at all (the records, not the chemical dependency). Heroin and alcohol.
Your medical records can only be viewed if you approve access, and employers are not allowed to ask for medical records. Foreign countries can’t see your medical records when you apply for a visa.
Possibly it could impact life insurance if you need to turn over medical records, but my life insurance policy was written after my drug abuse days so I don’t think it would matter.
My father has cardiac issues, serious ones. When a doctor asks what he wants to do, he routinely says "Sail around the world, solo!" because that's about the stupidest, riskiest thing a person with a bad heart could consider.
So now every single doctor reads the transcript and starts with saying "I think it'd be really poorly advised for you to keep considering your worldwide solo voyage."
AI summarization doesn't carry the tone well. All but the most humorless humans would catch from the way he says it that it's a joke.
20 years ago, I was being evaluated by a psychiatrist, who was a foreigner with a foreign accent and English as a second language.
There was a vending machine where I lived, and it sold cans of Coke, Sprite, and Hawaiian Punch. I had been choosing the latter, as the "lesser of evils" because it didn't contain caffeine, and perhaps the Vitamin C was not harmful.
So she asked about my diet and habits, and I told her, "I've been drinking a lot of Hawaiian Punch," and she responded that that was very bad for me. I nodded solemnly, and as the conversation progressed into more dissonance, I said, "Hawaiian Punch doesn't contain alcohol!"
And she said "Oh, I thought you said you had been drinking a lot of wine punch."
Errors can be a significant problem in manual charting as well.
I know a medical professional who applies an evaluation process similar to the one outlined in your second link, but to human-written charts. They then use that feedback to guide the department on how to improve its charting.
So don't presume that the error rates cited in those studies should be compared to a baseline rate of zero. If you review human-written charts, you will often find the error rate is not zero either.
In the US, HIPAA gives patients a right to access and have corrections added to their medical record.
But from my conversations with a person I know who does this work, I don't think the typical problems with patient charts are anything that would be remotely noticeable to a patient; they're usually deficiencies of technical and/or clinical significance.
I don't think anyone mentioned comparing AI error rates to a base rate of zero. What has been mentioned is significant numbers of clinically significant omissions, and outright hallucinations. Blatant fabrications should never happen with a human scribe, and one would expect clinically significant omissions to be rarer, because a human has clinical judgement that an AI can't have.
That article is from 8 years ago, accuracy is dramatically better today. We see a few percent error rate.
From the 2025 study's conclusions: "The CAISs demonstrate high levels of summarisation accuracy. However, there is great disparity between the currently available CAIS products and, while some perform well, none are perfect. Clinicians should therefore maintain vigilance, particularly checking omitted psychosocial details and medications, and scrutinising plausible-sounding insertions. Purchasers and regulators should be aware of the significant performance disparities identified, reinforcing the need for careful evaluation and selection of CAIS products."
This is exactly what I say and how we teach our people to use it. At the end of the day the human is responsible for the accuracy. We do have providers who decline to use AI because they don't want to double check it, and that's fine by us.
> On the gripping hand, people who work in the management end of the US healthcare industry can't be trusted with healthcare or information security to begin with.
No, this blanket statement is far too broad. Health insurers are by far the least trustworthy. Provider organizations are a very, very different group. In my 12 years I have never had a PHI breach or leak that wasn't a human making a mistake. No hacks, no credential breaches, no backdoors or zero-days, no network infrastructure penetrations. Two former employers had breaches years after I left, which I think speaks well to my track record. I take security incredibly seriously. Our patients are the most important part of my job.
I'm glad your organization hasn't had a PHI breach. I'll see your anecdata and raise you mine:
The two biggest hospital providers in my geography have both had breaches in the last 5 years, both involving exfiltration of PHI (and one involving ransomware). (My family's data was in both, too!)
I have a background in IT security and systems administration (including working as a contractor for healthcare providers). Since medical records have become "electronic" I've assumed medical data is de facto public.
If there was a diagnosis or treatment I felt others knowing about would compromise me I would avoid bringing it up to a medical professional or seeking treatment. I'm certain there are people who avoid mental health services, for example, for exactly that reason.
> That article is from 8 years ago, accuracy is dramatically better today. We see a few percent error rate.
Your reading comprehension is not good.
The 2018 study is for "traditional" voice recognition, followed by a human transcriptionist, followed by physician review and signoff.
And it has much lower rates of errors than the 2025 study on LLM transcription.
Really, I think the problem is that the LLM transcribers pretend they can do the work of the humans. Keep the humans, and the accuracy would probably be on par with Dragon. But then there's no reason to deal with LLM "hallucinations" at all, and the cost/value argument falls apart.
CAIS eliminates educated people who reduce errors, in order to shift profits to ... well, you.
> That article is from 8 years ago, accuracy is dramatically better today. We see a few percent error rate.
I’m a radiographer and get AI-generated radiology referrals.
The quality is very variable, and I believe it comes down to how well they are proofread. One referrer writes very poor referrals without AI, and AI-assisted ones that look good at a quick glance at the time of booking.
However, when you try to scan the patient and read the referral more closely, the AI ones are nonsense and garbage. I blame the referrer.
It’s been a year or so since I last read The Mote in God’s Eye / The Gripping Hand, but I was randomly thinking of it this morning. Very funny that I would see a reference to it the same day.
I'm pretty sure ai-x writes sarcasm and skips the /s for pure fun. Personally, I'm amused and I like what he's doing. Others have done it before him though, it's not a new trick.
Remember that the difference between "Flock can do whatever the hell it wants" and "Flock is required to delete your data at your request" is a law. Citizens vote for legislators. If you want this to be a higher priority for your legislators, buy them off.
My custom XFWM theme has square corners on windows without focus and large-radius rounded corners on the one window with focus.
The square corners are part of a 2 pixel wide border (one black, one white) because who needs to waste space on handling things we aren't manipulating? But the title bar is high-contrast, because you'll go looking for it when you want to switch windows.
The round corners go with a fairly thick border in a customizable color, usually something very bright in the yellow, orange or cyan ranges. When you sit down, you should immediately know what is active.
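In case anyone wants to reproduce something like this: as far as I know, in xfwm4 themes the border width and corner shape are drawn into the theme's pixmaps (rounded corners are cut out with transparent pixels in the corner images), while themerc only carries colors and layout. A rough sketch, with illustrative color values:

    # ~/.themes/FocusTheme/xfwm4/themerc -- sketch; colors are examples.
    # Border width and corner radius are NOT set here: they are drawn into
    # the theme's pixmaps, e.g. top-left-active.xpm (rounded, focused) vs
    # top-left-inactive.xpm (square, unfocused), using transparent pixels.
    active_text_color=#000000
    inactive_text_color=#ffffff
    title_alignment=center
    full_width_title=true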