Misdiagnosis: is it what doctors think, or HOW they think?

As a heart attack survivor who was sent home from the E.R. with a misdiagnosis of indigestion despite presenting with textbook symptoms (central chest pain, nausea, sweating and pain radiating down my left arm), I’m pretty interested in the subject of why women are far more likely to be misdiagnosed in mid-heart attack compared to our male counterparts.

Dr. Pat Croskerry is pretty interested in the subject of misdiagnosis, too. He’s an Emergency Medicine physician, a patient safety expert and director of the critical thinking program at Dalhousie University Medical School in Halifax. In fact, he implemented at Dal the first undergraduate course in Canada about medical error in clinical decision-making, specifically around why and how physicians make diagnostic errors. Every year, he gives a deceptively simple critical thinking quiz to his incoming first-year med students.*

So here’s your chance to practice thinking like a doctor. Try answering these yourself, but as Dr. Croskerry advises, don’t think too hard. If you were an Emergency Department physician, paramedic or first responder, he warns, you’d have only seconds to size things up and make a decision. Don’t read ahead to peek at the answers! Now, here are your questions:  

1.  A bat and a ball cost a total of $1.10.  The bat costs $1 more than the ball. How much does the ball cost?

2.  If five machines take five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?

3.  In a lake with a patch of lily pads in the centre, every day the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long did it take for the patch to cover half the lake?

4.  A study of 1,000 people includes five engineers and 995 lawyers.  A randomly selected participant named Jack is 36 years old, unmarried, introverted, likes to write computer programs, and reads science fiction in his spare time. Is Jack most likely to be an engineer or a lawyer?

Here are the answers:

1.  The bat costs $1.05 and the ball costs five cents. (Most people would say that the bat costs $1 and the ball costs ten cents, which would make the bat $.90 more than the ball, not $1).

2.  It takes 100 machines five minutes to make 100 widgets (each machine makes one widget every five minutes, so the time stays the same whether there are five machines or 1,000 – not 100 minutes, as most people might say).

3.  It took 47 days for the patch to cover half the lake. Since the patch doubles every 24 hours, it covered half the lake on the day before it covered the whole thing (not on day 24, as many would guess).

4.  Jack is most likely a lawyer – he’s already been described as a “randomly selected” participant, meaning that the chance of him being one of the five engineers is only 0.5%.* (A quick check of the arithmetic behind all four answers follows this list.)
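
Here’s that check – a minimal Python sketch of my own (not part of Dr. Croskerry’s quiz), just to confirm the arithmetic:

  # Quick arithmetic check of the four quiz answers (illustrative only).

  # 1. Bat and ball: ball + (ball + 1.00) = 1.10, so the ball costs $0.05.
  ball = (1.10 - 1.00) / 2
  bat = ball + 1.00
  print(f"Ball: ${ball:.2f}, bat: ${bat:.2f}")                # Ball: $0.05, bat: $1.05

  # 2. Widgets: each machine makes one widget in 5 minutes,
  #    so 100 machines make 100 widgets in those same 5 minutes.
  minutes = 5 * 100 / 100
  print(f"100 machines, 100 widgets: {minutes:.0f} minutes")  # 5 minutes

  # 3. Lily pads: the patch doubles daily and fills the lake on day 48,
  #    so it covered half the lake one doubling earlier, on day 47.
  print(f"Half the lake covered on day {48 - 1}")             # day 47

  # 4. Jack: only 5 of the 1,000 randomly selected participants are engineers,
  #    so the base rate of being an engineer is just 0.5%.
  print(f"P(engineer) = {5 / 1000:.1%}")                      # 0.5%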

How did you do?  Only 2-3% of Dr. Croskerry’s brainy med students get all four quiz questions right. As he told the Halifax Chronicle Herald last year:

“During the last 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions and other reasoning failures.

“It seems that much of our everyday thinking is flawed – and clinicians are not immune to the problem.”

He tells his medical students that about 75% of diagnostic errors are not due to a knowledge deficit, but to cognitive failures, or the way that people think.

Cognitive failures may be linked to the lack of what’s called situational awareness.  This involves:

  • knowing what has gone before
  • knowing what is happening now
  • anticipating what is coming
  • having one’s cognitive engine in the right gear

For many years, researchers have studied how critical signals guide decision-making (for example, Swets et al. 1961). But Dr. Croskerry warns that few signals arrive in complete isolation. They’re usually accompanied by some degree of noise or interference, as he explains:

“Similarly in medicine, a particular problem for physicians is the degree of overlap among diseases. Some conditions (like shingles, basal skull fracture or shoulder dislocation, for example) usually present little challenge for diagnosis; they are relatively unambiguous and readily identified, and accompanied by very little noise.

“Other diseases (pericarditis and acute myocardial infarction, for example) manifest themselves less clearly and may be mimicked by other conditions.

“Worse still, some conditions (such as ureteral colic and dissecting abdominal aneurysm, or subarachnoid hemorrhage and migraine) may show complete overlap in their symptomatic presentation.”

Dr. Croskerry adds that in these latter examples, the probability of correctly diagnosing the disease on the basis of clinical presentation may be no better than chance because noise may completely overlap the signal.
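
To make that concrete, here’s a minimal Python sketch – my own toy illustration with made-up numbers, not drawn from Dr. Croskerry’s work – showing that when two conditions produce exactly the same distribution of presenting features (complete overlap of signal and noise), no threshold rule can beat a coin flip:

  import random

  # Toy model: two conditions, A and B, produce a "symptom score" drawn
  # from the very same distribution (complete overlap of signal and noise).
  random.seed(0)

  def symptom_score():
      # Identical distribution regardless of the true condition.
      return random.gauss(5.0, 1.0)

  def classify(score, threshold=5.0):
      # Any threshold rule you care to pick...
      return "A" if score >= threshold else "B"

  trials = 100_000
  correct = sum(
      classify(symptom_score()) == random.choice(["A", "B"])
      for _ in range(trials)
  )
  print(f"Diagnostic accuracy with complete overlap: {correct / trials:.1%}")  # ~50%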

Diagnostic failure rates in modern hospitals approach 15%, a rate Dr. Croskerry describes as “staggering”.

In a 2011 study, for example, he and his team looked at factors contributing to what they called “abysmal diagnostic failure rates”. They studied patients who had been victims of “serious misdiagnoses” on a hospital’s internal medicine service.  Here’s what they found:(1)

  • 7% – could not have been avoided (no fault)
  • 28% – cognitive error only (an error in the way the physician thinks)
  • 19% – system-related (e.g. the wrong charts were sent, or the patient was mis-identified)
  • 46% – a combination of both system-related and cognitive errors

Researchers concluded that the use of checklists could address many of these diagnostic failures:

“Diagnostic errors are common and can often be traced to physicians’ cognitive biases and mental shortcuts. A great deal is known about how these faulty thinking processes lead to error, but little is known about how to prevent them.

“Faulty thinking plagues other high-risk, high-reliability professions, such as airline pilots and nuclear plant operators – but these professions have reduced errors by using checklists.”

Surgeon-turned-author Dr. Atul Gawande has compared using checklists in medicine to something that “house movers, wedding planners, and tax accountants figured out ages ago.”

As Dr. Croskerry’s team explained:

“Pilots do not have the option of skipping their checklists when the risk is low (sunny day, familiar airport, experienced crew). However, any recommendation to physicians to ‘use this checklist exactly when you think you don’t need it’ will likely be met with skepticism.

“It would be tempting to use checklists only when we lack confidence in our diagnoses, but confidence is a poor predictor of diagnostic accuracy.”

In the 2011 research on diagnostic failure that Dr. Croskerry worked on, his team acknowledged some key differences between diagnostic checklists and those already well-accepted in aviation and industry.

In both medical and non-medical settings, for example, checklists are read aloud by teams rather than silently by individuals.

“But diagnosis is usually silent, lonely work, and a natural pause point to review the checklist, such as before takeoff or before incision, does not exist in diagnosis, which can stretch over hours, days or even months.”

Checklists have gained broad acceptance in medical settings such as operating rooms and intensive care units, but the 2011 study extended the checklist concept to diagnosing by providing an alternative to reliance on intuition and memory in clinical problem-solving.

“This kind of solution is demanded by the complexity of diagnostic reasoning, which often involves sense-making under conditions of great uncertainty and limited time.

“The key to reducing diagnostic errors may be less tied to checklists than to a diagnostic time-out – a brief pause to reflect on our diagnostic reasoning while reviewing a checklist and documenting the procedure in the medical record.”

*  This quiz is based on the Cognitive Reflection Test, originally designed by Dr. Shane Frederick at M.I.T.
(1) Ely JW, Graber ML, Croskerry P. “Checklists to Reduce Diagnostic Errors.” Acad Med. 2011 Mar;86(3):307-13.

Q:  Should diagnostic checklists be commonplace in medicine?

NOTE FROM CAROLYN: I wrote more about cardiac diagnostic tests that have been developed and researched on (white, middle-aged) men in my book, A Woman’s Guide to Living with Heart Disease. You can ask for it at your local library or favourite bookshop (please support your local independent booksellers) – or order it online (paperback, hardcover or e-book) at Amazon – or order it directly from my publisher, Johns Hopkins University Press (use their code HTWN to save 30% off the list price).

See also:

When doctors can’t say: “I don’t know”

Misdiagnosis: the perils of “unwarranted certainty”

When your “significant EKG changes” are missed

The ’18 Second Rule’: why your doctor missed your heart disease diagnosis

Heart attack misdiagnosis in women

Seven Ways To Misdiagnose a Heart Attack

Doctors who aren’t afraid of “Medical Googlers”

19 thoughts on “Misdiagnosis: is it what doctors think, or HOW they think?”

  1. While I am no longer in direct patient care, I am still involved with Nursing Assessments, even if after the fact (7 days, yay government paperwork).

    I was reading over the tests ordered on one of our patients with Dementia. Urine Dip – negative, UA – Negative, CBC, CMP all negative. This patient’s change in behaviors was being tested every which way from Sunday.

    After reading everything over for an hour, I asked the charge nurse if anyone had thought to offer the patient acetaminophen. “She denies pain.” She has Dementia – do you think she understands your question completely? Tylenol was given and the behaviors stopped. The patient had pain she was unable to verbalize.

    Tweaking how we think or even how the patient answers is needed.

    1. Interesting… So much of medicine is simply figuring out what the problem is NOT, so the culprit sometimes lies waiting through a long process of elimination. Good detective work there, Nurse Elizabeth!

  2. Computers don’t seem to carry the same bias that humans do when making diagnoses. See this article. It’s probably not far in the future that the same computer that handles the medical record will also help with the diagnosis — I hope that day is not far off.

    1. Thanks for that link, Dr. B. I have mixed feelings on this issue. First, I’m pretty sure that if the ER doc who misdiagnosed me with GERD, despite my textbook MI presentation, had just entered my symptoms into IBM Watson (or even Dr. Google), the only possible diagnosis would have come back as myocardial infarction. On the other hand, many patients are already concerned that doctors seem to be losing the crucial skills required for a hands-on physical exam. I anticipate a day when – thanks to gee-whiz tech like IBM WatsonPaths or Google Glass – even things like basic eye contact during a health care interaction will become history.

    2. A few days ago, at a Stanford Medicine X talk, venture capitalist Vinod Khosla suggested computers armed with big data should make most clinical decisions. Read a short discussion of this presentation.

      1. Thanks Genevieve. Yes, I’d seen this. But remember that this is Vinod Khosla we’re talking about here – who has made his billions in venture investments in various tech sectors. Of course he’s betting on computers to disrupt medicine as we know it. For years he’s predicted that one day, machines will replace 80 percent of doctors. But as this article observes: “Big Data is cold science… that might predict the incidence of diabetes by learning that it correlates with some random marker like purse thefts or banana imports.” My own vision is a systemic shift in how Real Live Physicians approach clinical decision-making – not moving towards abdicating the task to machines.

  3. I’m unsure just how things are done in other states and countries, but the problem with education in Texas has become state standardized testing performance. There is so much pressure to have students perform well that teachers teach to the test, or how to beat the test.

    I have met so many kids my son’s and daughter’s age (23 and 14) who have good grades in school but couldn’t find their way to or from school or the grocery store if they had to walk. Also, our children are entirely too dependent on technology. What if Y2K happened, or something similar? A computer hacker or terrorist finally destroys the grid. I digress.

    While working at Stanford University Hospital, I met many bright young physicians, but many had only book smarts, not the life experience to put their learning into context. That is where we nurses came in. We would explain the practical application of the science to the patient – and that the patient wasn’t just a research candidate. Other times I have had to be the patient advocate and question a doctor’s action or inaction.

    1. Hello Beth – You’re describing a massive chasm in medical education between the brainiacs who successfully make up that 97th percentile of applicants for med school and the slightly less brainy candidates who actually have life experience AND smarts (and a personality would also be nice!) I’m hopeful that this is changing as med schools broaden application criteria to include life experience, but alas, I’ve met my fair share of docs over the years with zero bedside manners or even the most basic of communication skills.

  4. I enjoyed this, Carolyn. I missed # 2 because it’s not clear from the question whether each machine can produce a widget independently of the others or if all 5 machines are required to act together to produce one widget. I was overconfident in my assumption — as happens in medicine.

    Mistakes are made because doctors assume their assumption is correct and don’t inquire further to see if perhaps the assumption is mistaken.

    That sounds to me like exactly what happened when you were misdiagnosed. 🙂

    1. And I suspect you’re right, Jan. In 2003, the journal Academic Medicine listed seven examples of identified thinking errors, including what you’re describing: Anchoring Bias (“locking on to a diagnosis too early and failing to adjust to new information”)

      Thanks for your comment here!

  5. I got every one of them right, and it’s scary that so few med students do.

    What’s especially scary is that often confidence is accepted as a virtue in itself. The problems are based in logic and understanding of what probability means, though careful reading helps. The last one is an indicator of the sheer weight of stereotypes in one’s thinking, as the instructions (“most likely”) point clearly to the answer. On the other hand, if by chance Engineer Jack had been randomly selected, he still would be an engineer 100%…

    1. Congrats Kathleen! I got 3/4 correct on this quiz (that bat and ball question threw me off!) I personally witnessed an example of unwarranted confidence in decision-making recently when I observed an Emergency physician teaching two med students while diagnosing my friend (later correctly diagnosed with a severe drug toxicity reaction called Stevens-Johnson Syndrome).

      But in the ER, the doc took out a small ruler and clearly pointed out to his students how one of my friend’s nostrils seemed to be a millimetre or two lower than the other, and then asked her: “Has your smile always been a bit crooked?” before explaining to his students that this was likely a case of Bell’s Palsy. By then, my friend’s eyes, lips and tongue were increasingly purple and swelling by the minute – and even I knew that this was no Bell’s Palsy case. Yet I watched, horrified, as the doc led his med students to this (quite wrong) differential diagnosis while they just stood there and nodded in agreement.

      1. Oh, that’s horrible! What I wonder is: Did the students ever learn that he was wrong!wrong!wrong!

        Have you ever posted on the study of radiologists that Jerome Groopman discusses in How Doctors Think? Radiologists were asked to read a series of images and also to rate their own certainty of diagnosis. The radiologists who were most accurate (95% and better) were the most uncertain, while the least accurate (75% – think of that…) were the most certain.

        I had the same split-second response to the bat and ball question that most people do, but immediately knew it was wrong because then the bat would not be $1 more than the ball. From then it was a quick adjustment.

        The questions are really very good at identifying potential patterns of error.

      1. They get really tacky and retaliatory when you ask for a second opinion. As a nurse, I’m constantly in the line of fire for asking questions, but if I have to I will take it to my medical director or the chief of staff. My duty is to my patient. Their life and safety comes before the doctor or even my own ego.
