"It's become politically correct to investigate nonsense." --R. Barker Bausell
Science-based medicine (SBM) evaluates health claims, practices, and products by the best scientific evidence available.* Central to science-based medicine, and distinguishing it from evidence-based medicine, is the notion that science exists as an interdependent network of theories, knowledge, and laws. Evidence-based medicine (EBM) considers as scientific evidence any results from a clinical trial (and subsequent meta-analyses and systematic reviews of clinical trials), regardless of whether that clinical trial was grounded in scientific plausibility. The authors of the SBM blog put it this way:
EBM is a vital and positive influence on the practice of medicine, but it has its limitations. Most relevant to this blog is the focus on clinical trial results to the exclusion of scientific plausibility. The focus on trial results (which, in the EBM lexicon, is what is meant by “evidence”) has its utility, but fails to properly deal with medical modalities that lie outside the scientific paradigm, or for which the scientific plausibility ranges from very little to nonexistent.
Implausible claims, such as the claim that water has memory (homeopathy) or the claim that unblocking chi restores health (acupuncture), must be seen against the backdrop of our entire body of scientific knowledge. Against that backdrop they are implausible, and that implausibility should be considered when evaluating clinical trials of such claims. On the other hand, critics of EBM find fault with the fact that some medical procedures are not based on clinical trials. In some cases, however, scientific plausibility can be determined without recourse to a randomized controlled experiment. You don't need a trial in which some people jump out of an airplane without a parachute, for comparison with those who jump with one, to know which procedure is safer. Likewise, some medical practices might be well-advised, based on scientific knowledge and plausibility, without requiring clinical trials. Would you require a clinical trial before you'd let a medical person apply a tourniquet above a bleeding artery?
SBM considers the randomized controlled trial (RCT) the gold standard for minimizing bias, but it is not the only standard in science-based medicine. Kimball Atwood of the SBM blog puts it this way:
...everyone here agrees that large RCTs are the best tools for minimizing bias in trials of promising treatments, and that RCTs have repeatedly demonstrated their power to refute treatment claims based solely on physiology, animal studies, small human trials, clinical judgment, or whatever.
What SBM requires is the recognition that a "reasonably high prior probability" that a treatment will work is a necessary condition for doing a clinical trial on it. There is something both illogical and unethical about doing a clinical trial to test, for example, whether bee pollen cures cancer or shark cartilage cures arthritis. Why? Because there is no good scientific evidence indicating that doing so would benefit anyone. Doing so on the whim of some U.S. Senator whose dog was cured of cancer by eating bee pollen (or so he thinks) is illogical. Exposing people or animals to substances for which there is no plausible science that would justify the exposure is unethical. Furthermore, as Kimball Atwood notes:
Some claims are so implausible that clinical trials tend to confuse, rather than clarify the issue. Human trials are messy. It is impossible to make them rigorous in ways that are comparable to laboratory experiments. Compared to laboratory investigations, clinical trials are necessarily less powered and more prone to numerous other sources of error: biases, whether conscious or not, causing or resulting from non-comparable experimental and control groups, cuing of subjects, post-hoc analyses, multiple testing artifacts, unrecognized confounding of data due to subjects’ own motivations, non-publication of results, inappropriate statistical analyses, conclusions that don’t follow from the data, inappropriate pooling of non-significant data from several, small studies to produce an aggregate that appears statistically significant, fraud, and more*....
When RCTs are performed on ineffective treatments with low prior probabilities, they tend not to yield merely ‘negative’ findings, as most physicians steeped in EBM would presume; they tend, in the aggregate, to yield equivocal findings, which are then touted by advocates as evidence favoring such treatments, or at the very least favoring more trials—a position that even skeptical EBM practitioners have little choice but to accept, with no end in sight.*
It is illogical to give more weight to a set of questionable clinical trials that indicate some minimal level of support for a treatment than to a body of scientific knowledge that indicates the treatment lacks any scientific plausibility. EBM doesn't consider scientific plausibility as significant a factor in evaluating the evidence for a treatment as SBM does. SBM considers scientific plausibility a necessary, though not a sufficient, condition for doing a clinical trial and for putting a treatment into practice.
Finally, SBM holds that Bayesian probability is superior to the p-value of “frequentist statistics” as a measure of the data from a clinical trial. An EBM model might look at a positive mammogram and conclude that if you test positive, you have an 80% chance of having cancer, since 80% of those with breast cancer test positive on a mammogram. An SBM (Bayesian) model would take into consideration the prior probability of your having breast cancer. One in one hundred women has breast cancer, and roughly 9.6% of women without breast cancer also test positive. So, if you tested positive, you have only about a 7.8% chance of having breast cancer. (See the math here.) Also, many researchers and medical journals mistakenly think that a p-value of 0.05 means that there is a 5% chance that the null hypothesis is true. Worse, many researchers and readers of medical journals mistakenly think that a p-value of 0.05 means that there is a 95% chance that the hypothesis of the study is true. In fact, a p-value of 0.05 means that, if the null hypothesis is true, data at least as extreme as those observed would be expected in about 5% of trials over many trials. Thus, the p-value for a single trial doesn’t provide conclusive evidence that a hypothesis is correct. We need many trials before we should assert with confidence that the null hypothesis is true or false.*
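The base-rate arithmetic behind the 7.8% figure can be sketched in a few lines of Python. The 9.6% false-positive rate is an assumption taken from the standard textbook version of this example, since it is the figure that produces the 7.8% result:

```python
# Bayes' theorem applied to the mammogram example.
# Assumed figures (standard in this textbook example): 1% prevalence,
# 80% sensitivity, 9.6% false-positive rate.
prevalence = 0.01        # P(cancer): one in one hundred women
sensitivity = 0.80       # P(positive test | cancer)
false_positive = 0.096   # P(positive test | no cancer)

# Total probability of testing positive = true positives + false positives.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' theorem: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
posterior = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {posterior:.1%}")  # -> 7.8%
```

The intuition: because only 1 woman in 100 has cancer, the roughly 9.5 false positives per 100 women swamp the 0.8 true positives, so most positive tests are false alarms despite the test's 80% sensitivity.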
Of SBM and EBM Redux. Part II: Is it a Good Idea to Test Highly Implausible Health Claims? "This is the second post in a series prompted by an essay by statistician Stephen Simon, who argued that Evidence-Based Medicine (EBM) is not lacking in the ways that we at Science-Based Medicine have argued....Part I of this series provided ample evidence for EBM’s “scientific blind spot”: the EBM Levels of Evidence scheme and EBM’s most conspicuous exponents consistently fail to consider all of the evidence relevant to efficacy claims, choosing instead to rely almost exclusively on randomized, controlled trials (RCTs). The several quoted Cochrane abstracts, regarding homeopathy and Laetrile, suggest that in the EBM lexicon, “evidence” and “RCTs” are almost synonymous."
An Intuitive (and Short) Explanation of Bayes’ Theorem by Kalid Azad
Lecture 1: Science Based Medicine vs. Evidence Based Medicine by Dr. Harriet Hall (Course Guide available here.) This lecture is part one of a ten-part series.