Harvard Health Blog
Online symptom checkers: You’ll still want to call a doctor when something’s wrong with you
Doctors make mistakes. I strongly believe in being open about this, and I have written about my own missed or delayed diagnoses on this and other blogs. But guess what? Research supports what I’ve long suspected: when it comes to making the correct diagnosis, doctors are way better than computers.
A recent study compared the diagnostic accuracy of 234 physicians with that of 23 different computer programs. The authors gave mystery clinical cases of varying severity and difficulty to doctors, and ran the same cases through various online “symptom-checker” programs. The cases came from The Human Diagnosis Project, which is itself a fascinating entity. This project, also known as Human Dx, is a worldwide open-access medical opinion website. People can submit cases that need to be solved, or they can help solve others’ cases. It is intended both for practical use, such as for doctors who are stumped by a case, and as a study tool.
I visited the website, registered, and perused the cases. Here is a typical case on Human Dx:
A 20-year-old female presents with fever and a sore throat. On questioning she also complains of excessive sleepiness. What are the top three most likely diagnoses?
I wrote down:
- infectious mononucleosis (i.e., mono)
- streptococcal pharyngitis (i.e., strep throat), and
- other viral pharyngitis (i.e., just a virus).
And that is exactly what the doctors and the computer-based symptom-checker programs were asked to do for 45 distinct clinical cases, with diagnoses ranging from common to rare and severity ranging from mild illness to emergency.
These researchers had previously run the cases through 23 different online symptom-checker programs. These are websites where you can type in your symptoms, or answer a series of questions, to get medical advice, like the ones on the Mayo Clinic website or WebMD.
Doctors got the correct answer on the first guess about 72% of the time, as compared with a sad 34% for the computer programs. Further, doctors got the correct answer in their top three about 83% of the time, as compared with 51% for the computers. Interestingly, when the physicians were separated by level of training, the interns (in their first year out of medical school) got the correct answer in their top three guesses 89% of the time, far better than their senior colleagues.
Obviously, the doctors weren’t perfect: they had a 28% error rate for the number one most likely diagnosis. But that’s still better than the computer programs’ 66% error rate. What the authors envision are programs that can help physicians improve their diagnostic accuracy.
Until then, if you have to choose one over the other, which one would you pick?
I’m on call…
About the Author
Monique Tello, MD, MPH, Contributor