When Explanations Become Dangerous: An Emergency Physician’s View on Explainable AI

In the emergency department, the most dangerous moment is not when we know nothing. It is when we think we understand what is happening.

Chest pain comes in at 2 AM. Vitals are stable. The first ECG looks fine. Labs are pending. The patient is young. It is very easy to start building a reassuring story before the dangerous causes are excluded.

Every emergency physician is trained to resist that instinct. Now we are introducing AI systems that explain their reasoning in clean, structured paragraphs. And we are assuming that explanation makes them safer.

From the bedside, that assumption deserves scrutiny.

Explanations Do More Than Inform

In clinical care, explanations are not neutral. They shape action. If a system says, “This chest pain is likely muscular because vitals are stable and labs are normal,” it sounds reasonable. The physiology checks out. The language is measured.

But in the emergency department, stable vitals do not rule out pulmonary embolism. Normal early labs do not eliminate acute coronary syndrome.

When reassurance appears before exclusion, it changes posture. It lowers vigilance. It subtly shifts the threshold for escalation.

That is where explainability becomes risky.

The Subtlety Is the Problem

The concerning examples are rarely absurd.

“Symptoms are consistent with anxiety given normal CT.” That sounds responsible. But normal imaging does not close the case. Anxiety is a diagnosis of exclusion. If the explanation arrives too early, it can compress uncertainty into comfort.

Or consider “Findings suggest low risk at this time.”

Low risk compared to what? Over what time frame? What is the plan if symptoms evolve? When should the patient return?

In emergency medicine, safety is built on sequence. You rule out the catastrophic causes first. Only then do you narrate the benign pathway.

If an AI explanation reverses that sequence, even subtly, it can reinforce premature closure.

Timing Is Clinical Judgment

Medical safety is not only about whether the final conclusion is correct. It is about when reassurance is offered and what has been excluded beforehand.

We are trained to ask ourselves, “What can I not afford to miss?” That question anchors our reasoning.

An explanation that feels complete can interrupt that question. It can create a sense that the cognitive work has already been done. In a busy clinical environment, that matters.

Transparency Is Not the Same as Protection

There is a growing belief that a model that shows its reasoning is safer, because transparency lets clinicians verify and validate its conclusions.

In theory, that is true. In practice, a well-structured explanation can feel persuasive enough that it is not interrogated. Confidence has weight. Structure has authority.

A fluent narrative that collapses uncertainty can be more dangerous than a fragmented answer that signals doubt.

The risk is not that the model explains. The risk is that the explanation reshapes how danger is perceived.

A Different Standard for Evaluation

When we evaluate explainable systems in healthcare, the question should not only be whether the explanation is medically coherent.

We should ask:

  • Does it preserve uncertainty where uncertainty exists?
  • Does it clearly separate what is ruled out from what is assumed?
  • Does it prioritize catastrophic exclusions before benign explanations?
  • Does it set clear thresholds for reassessment and return?

If the explanation sounds polished but quietly lowers vigilance, it is not safe.

Explainability needs to be stress-tested under variation, emotional framing, and ambiguous data, the same way triage decisions are tested. Because at the bedside, errors do not usually come from wild guesses.

They come from reasonable stories formed too early.

Why This Matters

AI systems in healthcare will only become more articulate. Their explanations will feel more natural, more confident, and more aligned with clinical language.

That evolution does not automatically increase safety. In medicine, the most dangerous answer is not always the one that is obviously wrong. It is the one that sounds right before it is ready to be.

Understanding is powerful. But so is misplaced reassurance. And in emergency care, reassurance given too soon can be the beginning of harm.
