
Why your doctor's AI recorder can be bad for your health (and privacy)

Privacy experts have nine good reasons why you can — and should — decline a doctor or therapist's request to record your conversations with AI tools.
A person sitting at a desk, phone in hand, as if talking to an AI recording tool. (Photo by Vitaly Gariev / Unsplash)

You might not realize it, but your doctor, therapist, or any other healthcare provider you see could be using AI to record what you say and generate your patient notes. The use of these tools should be proactively disclosed, but it isn't always.

Now, Professor Emily M. Bender and journalist Decca Muldowney are warning in their excellent newsletter about the rising use of AI recording tools in doctors' offices and other medical settings, and explaining why you should resist their use.

"These systems take in a (presumably audio-only) recording of the patient encounter and then output a draft patient note for the chart... These scribing tools are being advertised as time-saving programs that allow healthcare workers to focus more on the patient and less on note-taking, but we are highly skeptical of these claims."

This is a really good piece, with a short nine-point bullet list of reasons why you should politely decline a doctor's request to record you with AI tools, or proactively declare your wish to opt out.

For the longest time, medical professionals used handheld dictaphones that recorded the audio of a patient conversation and stored it on the device's memory. Those recordings relied on the physician listening back and transcribing the audio by hand, or on an online transcription service that's regulated and cleared for use in healthcare. Newer AI-enabled, internet-connected recording tools instead suck up the audio of the conversation and use a consumer cloud service and AI tools to transcribe it into text, leaving a copy of the audio, and the text describing your doctor's visit, in the hands of whatever AI company or public cloud the recorder relies on.

That's not great from a security and privacy point of view, especially given the inevitability of security lapses or data breaches.

But an equally important aspect is that AI products can't be trusted to produce accurate results. Errors happen, but AI products can also just make things up, a phenomenon known as "hallucination." AI may get better over time, but it's not there yet, and the last thing you want is something as unpredictable and inaccurate as AI in a medical setting, where specific details and accuracy really matter, and can mean the difference between being healthy and not.

As Bender and Muldowney note, these issues also present a challenge for informed consent: Are patients aware that their conversations are sitting on someone else's computer somewhere in the digital ether? Are patients aware that this information can be used to improve AI models? And on the flip side, will patients even want to open up to their doctor if they know they're being recorded?

This is an important piece not just for patients and consumers of healthcare to read, but also for decision-makers in the healthcare industry, who should hear and consider the business counter-arguments rather than fall for another technological fad that only makes things worse in the long run.

~ ~

Thank you so much for reading ~this week in security~. Please consider a paid subscription or a one-time tip to show your support. Feel free to reach out with any feedback, questions, or comments about this article: this@weekinsecurity.com.