The 20th century was an explosion of scientific innovation and discovery. The success of chemistry, physics, and biology in producing things such as antibiotics, radiography, and genetic analysis could not be ignored.
Medicine is not now, nor has it ever been, a “science.” But over the past century, medicine became science-based: while medicine did not adhere strictly to the scientific method, it relied on underlying scientific principles, conceptual frameworks, and methodologies.
As scientific innovation began to stagnate, social activism and political advocacy grew significantly. Medicine, in turn, adopted a different set of values, most of which came from the social sciences. The result is the modern system of medicine we see today (i.e., evidence-based medicine).
David Gorski over at Science-Based Medicine frequently discusses the issues that have arisen from this development: “As currently practiced, EBM (evidence-based medicine) appears to worship clinical trial evidence above all else and nearly completely ignores basic science considerations, relegating them to the lowest form of evidence, lower than even small case series. This blind spot has directly contributed to the infiltration of quackery into academic medicine.”
The adoption of evidence-based medicine changed many of the underlying conceptual frameworks that medicine uses to make decisions. One example is basing treatment solely on statistical and clinical outcomes, with no regard for biochemical roles and functions.
Theoretical biologist Stuart Kauffman (along with the biologist Nessa Carey) pointed out that if you look only at the statistical relationships among components in medicine, you will be led down a blind alley. You will conclude that the heart exists solely to add weight to the chest, filling up the space between the lungs.
I stress this point because statistics is not a science. Statistics can tell you whether your data are significant, but it cannot say anything about your methodology or experimental design. I mention this because many brilliant clinicians are never given formal education in statistics and do not understand probability.
Regardless, many physicians make clinical decisions based solely on “statistics” and “probability” with zero regard for the underlying mechanistic relationships between the variables that they are operating on.
To put it in Kauffman’s terms, they make clinical decisions as if the heart existed to add weight to the chest rather than to pump blood.
To give one example of many: several well-respected physicians do not believe that proton pump inhibitors (PPIs) increase the risk of contracting the superbug Clostridium difficile (C. diff). They reason that “the probability is low,” completely ignoring the fact that C. diff grows at an optimum pH of 6, which falls within the pH range of stomach acid in patients taking PPIs.
It would seem that this pathological “clinical data above everything else” way of thinking, which permeates almost every aspect of our health care system, has finally begun to reverse course, at least with regard to how the FDA operates.
This is best exemplified by the recent approval of aducanumab for Alzheimer’s patients. As STAT News’ Adam Feuerstein and Damian Garde reported: “Instead of judging Biogen’s treatment solely on its effects on cognition, the FDA granted a conditional approval based on Aduhelm’s ability to clear the toxic proteins, called beta-amyloid.”
While we still lack a complete cause-and-effect model of Alzheimer’s disease, there is no doubt that its etiology is multifaceted and nonlinear, which helps explain why the disease takes so long to develop. I mention this because one of the major problems with FDA approval has always been its insistence on applying the same framework and methodology to drugs for long-term (chronic) diseases as it does to drugs for short-term (acute) ones.
To the fullest extent of my knowledge (which isn’t much), there is no model that can quantify the relationship between beta-amyloid levels and cognitive function, let alone with time factored in. Just as Alzheimer’s takes a long time to develop, treatment may also take an extended period to demonstrate improvement. The FDA needs to start incorporating this framework into how it assesses therapeutics for approval.
While there are numerous roadblocks to providing a viable treatment for Alzheimer’s, the FDA’s approval of aducanumab marks a first step toward a mechanistic, science-based approach to medicine. Here’s hoping more will follow.
Robert Trent is a graduate student who blogs at Medaphysics.
Image credit: Shutterstock.com