
When it comes to medical advice, is less always more?

To the general public, the trial-and-error process of science and medicine may erode confidence, but without it, we’d have no science and medicine at all.

Peter Attia

Read Time 4 minutes

You have probably heard me mention the concept of “strong convictions, loosely held” (see episodes 202 and 103). It is an idea central to science—the best scientists can quickly adapt or change their opinions in light of new, convincing evidence. Science is achieved through trial and error, as well as constant revision of previous beliefs, and this process is critical to advancing knowledge. It enables us to be harsh critics of research, to adopt strong evidence and move science forward, and to discard ideas that do not stand up to scientific scrutiny. Scientific research, medicine, and broader healthcare are inextricably linked and frequently rely on one another to generate meaningful knowledge and improve people’s health. But to the general public, this iterative process and potential backtracking tend to come across as uncertainty, equivocation, or just plain lying, often undermining public trust in medicine. So how can we maintain public faith in the face of the inherent changeability of science?

Is the answer to “do less”?

In a recent commentary piece, Dr. John Cleland argues that sometimes the answer is simple: less is more. Cleland's message is that healthcare professionals must exercise extreme caution when making recommendations and treatment decisions based on inconclusive evidence, which is frequently disguised within meta-analyses of equally inconclusive clinical trials.

Dr. Cleland argues that making recommendations based on inconclusive or weak evidence presents a two-pronged problem. First, it may cause direct harm if a treatment or procedure later proves to be harmful. Second, frequent reversals tend to erode the public's faith in science and medicine. "Doing less" is a straightforward idea intended to tackle both problems with the simple recommendation against the incautious and liberal use of treatments with questionable efficacy or applicability to the patient. Take, for example, the case of polypharmacy – the practice of prescribing multiple medications, each of which may or may not be effective. Even if the medications are harmless, inundating the patient with pills and prescriptions may result in lower overall adherence and, as a result, lower consumption of the treatments that are truly needed. The patient suffers, and the physician is forced to backtrack on previous recommendations.

Dr. Cleland makes valid points, and I absolutely agree that physicians bear a responsibility to be judicious in their treatment recommendations, especially in the contexts cited by Dr. Cleland of aggressive interventionist campaigns targeting large populations (e.g., his example of aspirin as a primary prevention measure for cardiovascular disease). In such cases, providers must maintain a level of skepticism, as these campaigns are often motivated as much (or more) by the financial interests of the pharmaceutical industry as they are by the interests of patients. Healthcare professionals have the difficult but critical task of acting as gatekeepers, discerning which treatments are not adequately supported by evidence and approaching patient care accordingly.

Evidence builds confidence, not proof

But what does it mean to be “adequately” supported? Almost no scientific hypotheses – and therefore no medical advice derived from those hypotheses – can ever truly be proven. Science is not mathematics, and we never close a scientific paper with the expression Q.E.D. or quod erat demonstrandum. We can accumulate supporting evidence in order to build the level of confidence we have in a given idea, and at best, evidence reaches the point where reasonable doubt no longer exists, but all it takes is one unexpected discovery to find we were wrong all along. 

Take the example of evolution: we have oodles of strong evidence supporting the idea that species evolved over billions of years through natural selection, and so far, no reliable evidence to refute that. It is as close as we can get to a scientific fact. But we can’t technically prove it – in large part because we don’t know what we don’t know. A millennium from now, a race of aliens might visit and explain that they planted evidence of evolution and introduced various life-forms to the planet just to see if any would figure out the puzzle. A very, very unlikely explanation, but technically, we can’t prove it’s not the case. 

In medicine, we are often dealing with hypotheses for which we have far less evidence than we have for evolution, so this principle applies all the more: accumulating evidence increases our confidence in ideas that can’t be proven outright. Of course, science has produced plenty of reliable evidence to support the majority of current medical advice, but nothing is completely without risk of reversal. 

Circumstances dictate the necessary level of confidence

So where do we draw the line for an acceptable level of confidence?

It depends on the circumstances. It is essential for a physician to conduct a risk assessment with the greatest individual benefit in mind, and in some cases, an aggressive – and even risky – treatment plan is indeed the best path forward for a patient. When prescribing medications for disease prevention in healthy, average-risk individuals, we have very little reason to take risks on drugs for which we have limited data. But in cases of profound suffering, urgency, or terminal illness, the calculation changes. For someone dying of cancer, enrollment in a Phase II clinical trial for a novel treatment might be the only glimmer of hope left, and no reasonable physician would stand in the way of that hope because of the lack of existing evidence to support the treatment’s efficacy or to rule out possible adverse events. Between the extremes of a completely healthy patient and one circling the drain, there exist an infinite number of circumstances that warrant their own unique risk-benefit calculations.

Reevaluation is the heart of the scientific process

It is a physician’s responsibility to make judgments on what constitutes an appropriate level of caution for a given situation, and, as Dr. Cleland suggests, sometimes that will indeed mean “doing less.” But what Cleland’s editorial fails to communicate is that no amount of evidence constitutes absolute proof, so every prescription or treatment carries some level of risk – however minuscule – that future research will invalidate it. The fact of the matter is that in any aspect of science, many questions remain unanswered, and physicians must nevertheless make decisions and recommendations in light of this changing scientific landscape. Thus, the only true way to avoid erosion of public trust in medicine is for the public to understand and accept these basic scientific principles of uncertainty, progress, and revision. Because if “doing less” means “only intervene when we have 100% certainty,” the medical community will find itself doing nothing at all…

Disclaimer: This blog is for general informational purposes only and does not constitute the practice of medicine, nursing or other professional health care services, including the giving of medical advice, and no doctor/patient relationship is formed. The use of information on this blog or materials linked from this blog is at the user's own risk. The content of this blog is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Users should not disregard, or delay in obtaining, medical advice for any medical condition they may have, and should seek the assistance of their health care professionals for any such conditions.