(10) Plausibility Bias and the Widespread Opinion That Homeopathy Has Been “Disproved”

More chapters on bias: Part 8: Industry Bias – A New Form of Bias or an Interesting Experimenter Effect? (with an introductory explanation of the term “bias”) and Part 5: Empiricism and Theory (1) – Bayes Bias

An analysis of systematic reviews and meta-analyses on homeopathy, unless done in detail and knowledgeably, usually comes to the conclusion that there is insufficient evidence for the efficacy of homeopathic medicines over placebo. From this, most readers, journalists and also many scientists derive the statement "Homeopathy is ineffective", and some people who do not think it through, or who pursue other interests, go even further: "The ineffectiveness of homeopathy is scientifically proven".

We notice an escalation here: from "lack of evidence of a difference between placebo and homeopathy", to "lack of evidence of efficacy", to "evidence of inefficacy". Where does this come from? A recent paper by Rutten and colleagues [1] introduces an interesting term to make this understandable: plausibility bias. This means: what we consider a priori to be conceivable, possible and reasonable also shapes the way we deal with data. Using the often-cited meta-analysis by Shang and colleagues [2], I want to run through this with the reader here.

I want to make one thing clear from the outset: it is not correct to say that homeopathy is ineffective, and that this is scientifically proven. It is only seen that way because homeopathy seems implausible to the vast majority of people at first glance.

In a first step, the Shang analysis [2] matched each of 110 homeopathic studies with a conventional study that treated the same condition and was of roughly the same size. Taken together, the results of the conventional and the homeopathic studies are surprisingly similar: both forms of intervention show a small superiority over placebo. In fact, they are so close that the authors themselves note with surprise that there is hardly any difference. They even emphasize that this cannot be attributed to methodological weaknesses in the homeopathy studies.

This is because 19% of the homeopathic studies, but only 8% of the conventional ones, were methodologically very good. Then Shang and colleagues did something rather unusual: instead of analysing all the studies, in a second step they used only 8 of the 110 homeopathy studies, together with 8 conventional studies of similar size that dealt with completely different diseases. This only became clear much later, when many readers and authors protested and demanded the list of studies included in the analysis. If one analyses only these 8 homeopathic studies and compares the result with the 8 conventional ones, one finds that, taken together, the 8 homeopathic studies could not demonstrate any difference between homeopathy and placebo, whereas the chosen conventional studies could.

The selection of these studies has been heavily criticized. Firstly, because it was unclear for a long time which studies they were. Secondly, because the selection criteria seem arbitrary. They were the "largest" studies, say the authors. But what counts as large? Is a study with 98 patients large, like my own, which was the last of the 8 included studies [3]? Why not one with 90 patients, which was no longer included in the analysis? These criteria and their rationale remained obscure [4,5]. A re-analysis of the data showed that the conclusions change if one changes the number of studies in the analysis, e.g. adds two, three or five more studies [6]. Such a so-called "sensitivity analysis" is in fact part of the standard repertoire of every meta-analysis, and it would have shown the authors that their conclusions are not robust and therefore not scientifically justified. The authors did not present one.
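To see why such a sensitivity analysis matters, the idea can be sketched with entirely hypothetical numbers (these are not the Shang data): pool the k "largest" trials with a simple fixed-effect inverse-variance model and watch how the conclusion changes as k grows.

```python
import math

# Hypothetical log odds ratios and standard errors, ordered from larger to
# smaller trials -- for illustration only, NOT the actual Shang et al. data.
log_or = [-0.02, -0.05, 0.05, -0.10, 0.08, -0.12, -0.06, 0.01,
          -0.55, -0.60, -0.50, -0.65]
se     = [0.10, 0.12, 0.14, 0.14, 0.16, 0.16, 0.18, 0.18,
          0.18, 0.18, 0.22, 0.22]

def pooled(k):
    """Fixed-effect inverse-variance pooling of the first k trials:
    returns the pooled log odds ratio and its 95% confidence interval."""
    w = [1 / s ** 2 for s in se[:k]]
    est = sum(wi * y for wi, y in zip(w, log_or[:k])) / sum(w)
    half = 1.96 * math.sqrt(1 / sum(w))
    return est, est - half, est + half

for k in (8, 10, 12):
    est, lo, hi = pooled(k)
    verdict = "significant" if hi < 0 else "not significant"
    print(f"{k:2d} trials: log-OR {est:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}] -> {verdict}")
```

With these made-up numbers the 8-trial cut comes out non-significant while 10 or 12 trials yield a significant pooled effect; this is exactly the kind of instability a sensitivity analysis is designed to expose.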

Since there is no scientific reason at all why one should take exactly those 8 and not perhaps 7 or 10 or even more, the conclusion of the analysis remains scientifically questionable. Interestingly, this substantial criticism is completely ignored by virtually all authors who cite the Shang analysis to prove the ineffectiveness of homeopathy.

Another interesting detail: Shang and colleagues note that there is a subset of studies on respiratory infections in which homeopathy actually does very well, with statistically significant and clinically relevant effect sizes. Other interpretations challenge this subset: because 11 of the 21 studies do not have a clearly positive result but only show a positive, non-significant trend, they suffer from a lack of statistical power, as the effects are not that large and the studies are rather small. However, an analysis of the conventional comparison studies used by Shang shows a comparable picture: 9 of the 21 conventional respiratory-infection studies are likewise inconclusive, with no significant effect. The pooled effect is thus about the same in both sets of studies. Why, then, do we assume that the homeopathic studies failed to prove efficacy, while holding on to the opinion that the conventional studies demonstrated it? This difference cannot be seen in the data. These views are a result of that very plausibility bias.
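The power problem mentioned above can be made concrete with a standard normal-approximation calculation for a two-arm trial. The response rates and sample sizes below are hypothetical, chosen only to show how easily a real but modest effect fails to reach significance in a small study.

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p_control, p_verum, n_per_arm, z_alpha=1.96):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation; z_alpha = 1.96 corresponds to alpha = 0.05)."""
    delta = abs(p_verum - p_control)
    p_bar = (p_control + p_verum) / 2
    se_null = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = math.sqrt(p_control * (1 - p_control) / n_per_arm
                       + p_verum * (1 - p_verum) / n_per_arm)
    return normal_cdf((delta - z_alpha * se_null) / se_alt)

# A 98-patient trial (49 per arm) with a real but modest difference in
# hypothetical response rates (40% vs. 55%):
print(f"power, 49 per arm:  {power_two_proportions(0.40, 0.55, 49):.0%}")
# The same effect with 200 patients per arm:
print(f"power, 200 per arm: {power_two_proportions(0.40, 0.55, 200):.0%}")
```

Under these assumptions the small trial detects the effect only about a third of the time, so a run of "non-significant" small trials is exactly what one would expect even if the effect were real.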

Bias always means a distortion of perception. In plausibility bias, perception is distorted by what we find plausible: most of us have no idea how homeopathy is supposed to work, so we either ignore the data or fail to interpret them correctly. Rutten and colleagues are clinicians, and they point out that their a priori willingness to see homeopathy as potentially effective comes from having repeatedly seen clinical effects of homeopathy themselves. Those who have not seen such effects often interpret the data differently.

Rutten and colleagues also point out that it has been, and still is, common in medicine for effective interventions to be developed from experience and proven through clinical use long before research makes clear why they work.

A frequently cited example of this process is acetylsalicylic acid (ASA), better known under the brand name “Aspirin”. In folk medicine, willow bark tea and extracts, which contain a similarly acting precursor of ASA, have been used against pain since ancient times. In 1897, ASA was synthesised by Bayer and then marketed as “Aspirin”. The mechanism – inhibition of prostaglandin synthesis – was not elucidated until 1971, and since then we have understood more and more details of the mechanism of action of salicin compounds.

From a clinical point of view, it would have been completely implausible to reject the proven use of salicin compounds simply because it was not (yet) understood how the substances work.

With homeopathy the problem is a bit more profound: here, on the basis of established knowledge, one cannot even imagine why it should work. But even this should at most be cause for healthy scepticism, which is always justified; it should not immediately lead to complete rejection and a refusal to perceive the data.

But that is exactly what is happening in wide circles at the moment – and that is exactly what plausibility bias is: a misperception, a refusal to perceive facts because they do not fit one’s own worldview. This is not – actually – how science should proceed; but it often does anyway, perhaps simply because it is less disruptive to one’s own view of the world.

The moral of the story? The statement that the ineffectiveness of homeopathy has been proven is itself unscientific: it cannot be verified, it is not consistent with the facts, and it is the result of a plausibility bias. If we want to avoid plausibility bias, we should much more often review the initial assumptions with which we view the world and on the basis of which we decide what we consider conceivable.

Sources and literature

  1. Rutten, L., Mathie, R. T., Fisher, P., Goossens, M., & van Wassenhoven, M. (2012). Plausibility and evidence: the case of homeopathy. Medicine, Health Care and Philosophy, doi: https://doi.org/10.1007/s11019-012-9413-9.
  2. Shang, A., Huwiler-Müntener, K., Nartey, L., Jüni, P., Dörig, S., Sterne, J. A. C., et al. (2005). Are the clinical effects of homeopathy placebo effects? Comparative study of placebo-controlled trials of homoeopathy and allopathy. Lancet, 366, 726-732.
  3. Walach, H., Gaus, W., Haeusler, W., Lowes, T., Mussbach, D., Schamell, U., et al. (1997). Classical homoeopathic treatment of chronic headaches. A double-blind, randomized, placebo-controlled study. Cephalalgia, 17, 119-126.
  4. Walach, H., Jonas, W., & Lewith, G. (2005). Letter to the Editor: Are the clinical effects of homoeopathy placebo effects? Comparative study of placebo-controlled trials of homoeopathy and allopathy. Lancet, 366, 2081.
  5. Fisher, P., Bell, I. R., Belon, P., Bolognani, F., Brands, M., Connolly, T., et al. (2005). Letter to the Editor: Are the clinical effects of homoeopathy placebo effects? Lancet, 366, 2082.
  6. Lüdtke, R., & Rutten, A. L. B. (2008). The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials. Journal of Clinical Epidemiology, 61, 1197-1204.