Bayes’ Theorem and Urgent Care Medicine: Why it Matters

John Shufeldt, MD, JD, MBA, FACEP
How many times have you encountered a patient who presents with an issue and tells you about a previously diagnosed condition that is still causing symptoms? It happens to me nearly every shift.
A 35-year-old male presents with chronic back pain. He has been to your urgent care center a number of times in the past and presents again with a variation of the same complaint. You review the past record, noting that one of your predecessors labeled him a “drug seeker.” This time his back pain radiates down both legs but he has no reported weakness, saddle anesthesia or incontinence. Your findings on physical exam, however, demonstrate a slight 4+/5 weakness on right leg extension.

At this point, what do you do? Do you take a step back, reevaluate, and come up with a new plan to ensure the appropriate diagnosis? Or do you “kick the can down the road” and simply treat the symptoms, sending the patient away for the next provider to sort out the diagnosis when the symptoms become more severe? In summary, did you change your beliefs (drug seeker) in light of the additional information (pain radiating down the legs and slight weakness)?

Enter Bayes. Thomas Bayes was an 18th-century English statistician and minister known for the theorem that bears his name, which was not published until after his death. Bayes’ theorem seems to be a straightforward, one-line rule: by updating our initial beliefs with objective new information, we get a new and improved belief. Or, as John Maynard Keynes reportedly said, “When the facts change, I change my opinion…”

Bayes’ theorem is credited with helping crack the Enigma code, which allowed the Allies to track down German submarines, and has since been applied in DNA decoding, spam filters, the Google search engine, and improvements in homeland security.
Bayes’ theorem depends upon a clever pivot: If you want to assess the strength of your assumption given the evidence, you must also assess the strength of the evidence given your assumption.

Regarding the patient above, Bayes would ask three questions:

  • How confident are you in the veracity of the diagnosis of drug seeker?
  • On the assumption that your original diagnosis is correct, how confident are you that the new history and physical is accurate?
  • And, whether or not the original diagnosis is accurate, how confident are you that the new information is accurate?

Make sense? A prior diagnosis can impede our current interpretation of the patient’s condition and bias us against seeking an alternative diagnosis.

Now let’s flip to our patient’s alleged drug-seeking behavior. Refusing to simply kick the can, you ask the patient for a urine sample to test your predecessor’s hypothesis. The high-quality drug test your center uses is 99% sensitive and 99% specific. This means the test will produce 99% true-positive results for drug users and 99% true-negative results for non-drug users. Our patient (selected somewhat randomly) tests positive, not for opioids (prescription pain meds) but for methamphetamines. What do you do now?

Do you throw this patient’s drug-seeking ass out of your urgent care center? Not so fast. Despite the apparent accuracy of the test, if he tests positive it is still more likely that he does not use the drug than that he does. Okay, now you think I’m on drugs.

Let me prove it to you. If 1,000 individuals are tested for methamphetamines and the prevalence of use is 0.5%, we expect to find 995 non-users and five users. From the 995 non-users, 0.01 (that is, 1 minus the 99% specificity) x 995 yields about 10 false positives. From the five users, 0.99 (the 99% sensitivity) x 5 yields about five true positives. Thus, out of roughly 15 positive drug tests, only five, or about 33%, are genuinely positive. Even if the sensitivity were 100% and the specificity 99%, the probability would still be about 33%.
Using Bayes’ theorem:

P(user | positive test) = (0.99 x 0.005) / (0.99 x 0.005 + 0.01 x 0.995) ≈ 0.332, or about 33%
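For readers who like to see the arithmetic run, here is a minimal Python sketch of the same calculation. The function name is mine, and the 0.5% prevalence is simply the assumption built into the 1,000-patient example above; the rest is just Bayes’ rule.

```python
def posterior_prob_user(sensitivity, specificity, prevalence):
    """Probability the patient is a drug user given a positive test (Bayes' theorem)."""
    true_positive = sensitivity * prevalence                # P(+ | user) * P(user)
    false_positive = (1 - specificity) * (1 - prevalence)   # P(+ | non-user) * P(non-user)
    return true_positive / (true_positive + false_positive)

# 99% sensitive, 99% specific test; assumed 0.5% prevalence of methamphetamine use
print(posterior_prob_user(0.99, 0.99, 0.005))  # ~0.332 -- about a one-in-three chance he uses
```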

If the specificity were increased to 99.5% while the sensitivity stayed at 99%, the probability would rise to nearly 50%. These results arise because the number of non-users is very large compared with the number of users, so the false positives (0.995% of those tested) outweigh the true positives (0.495% of those tested).
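A short extension of the same sketch, again assuming the 0.5% prevalence from the example above, shows how little these hypothetical improvements in the test actually change the picture:

```python
# Recompute the posterior for the test characteristics discussed above,
# keeping the assumed 0.5% prevalence from the 1,000-patient example.
prevalence = 0.005
for sens, spec in [(0.99, 0.99), (1.00, 0.99), (0.99, 0.995)]:
    p = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    print(f"sensitivity {sens:.0%}, specificity {spec:.1%}: P(user | +) = {p:.1%}")
# roughly 33.2%, 33.4%, and 49.9% -- better, but still nowhere near certainty
```

Even a test that is wrong only once in 200 times leaves you at a coin flip, simply because true users are so rare in the tested population.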

Here is another example. Remember when merthiolate (thimerosal) was used in vaccines and was thought to cause autism? At the time, the apparent association made a causal link seem plausible. Then came evidence that, despite the removal of merthiolate, the rate of autism did not decline. Yet, despite this posterior knowledge (knowledge gained after the outcome of the studies), some individuals remain convinced that their prior hypothesis (merthiolate causes autism) is correct.

Back to our patient. You have convinced yourself that the urine drug screen was likely a false positive and that the patient is not abusing or diverting prescription narcotics. Your prior hypothesis now needs to be updated: more likely than not, the patient is not, as your predecessor decreed, a “drug seeker.” And you have new historical and physical findings suggesting the patient may, in fact, have a pathological reason for his symptoms.

Here is where we get ourselves in trouble. A nurse tells us that a patient has new findings or complaints, yet we blindly continue down the same diagnostic path we were on before the new symptoms appeared. Or, like the people who still attribute autism to vaccines, we fail to update our prior hypothesis when presented with new facts.

Here is the take-home point: Do not be wedded to a prior diagnosis when presented with new information. This happens ALL THE TIME and is a leading contributor to medical misadventures and untoward patient outcomes.

Be fluid. Conditions change, people change, and facts change. Update your analysis when presented with new data, and do not fall into the “Well, he DID have a history of XYZ and I went along with what my predecessor determined was the best plan of action” trap, or you may find yourself on the wrong end of an 18th-century minister’s theorem.

Chief Executive Officer at MeMD, LLC, Mentor and Author at Outliers Publishing, Principal at Shufeldt Consulting, Founding Partner of Shufeldt Law Firm