(8) Industry Bias – a New Form of Bias or an Interesting Experimenter Effect?

Before we get into the details, a few explanations for those readers who are not familiar with the terminology and context:

Bias is the technical term for a systematic distortion of study results. Classical methodology assumes that all kinds of variables can distort results.

For example, if there are more smokers, more drinkers, smarter people or poorer people in a group, this could affect the outcome and make effective interventions appear ineffective or ineffective interventions appear effective. That’s why study cohorts are created by random allocation, so that all these variables are balanced as much as possible.
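To make the idea concrete, here is a minimal sketch in Python with entirely made-up numbers: if 200 hypothetical participants, about 30% of them smokers, are split into two arms purely at random, the smoker rates in the two arms come out roughly equal, without anyone having matched them by hand.

```python
import random

# Minimal sketch of random allocation (all numbers made up): with enough
# participants, chance alone balances known and unknown variables
# (smoking, drinking, income, ...) across the two arms.
random.seed(42)

participants = [
    {"id": i, "smoker": random.random() < 0.3}  # roughly 30% smokers
    for i in range(200)
]
random.shuffle(participants)                    # the random allocation itself
treatment, control = participants[:100], participants[100:]

for name, arm in [("treatment", treatment), ("control", control)]:
    rate = sum(p["smoker"] for p in arm) / len(arm)
    print(f"{name}: {rate:.0%} smokers")        # the two rates should be close
```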

Another typical form of bias is lack of blinding: if, for example, patients or clinical assessors know the group allocation, there is a chance that the assessment will be coloured by that knowledge.

Or if the study director knows that the next patient will end up in the control group, he might, secretly or unconsciously, delay that patient a little until he lands in the “right” group, for example because he is particularly fond of this patient. This so-called “allocation bias” is usually reduced by having a computer program make the allocation.
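How a computer program takes this decision out of human hands can be sketched in a few lines; the function names and numbers below are purely illustrative. The point is that the allocation sequence is fixed in advance and hidden, so nobody can foresee, or manoeuvre around, the next assignment.

```python
import random

# Illustrative sketch of concealed computer-based allocation: the sequence
# is generated up front, held by an independent party, and revealed only
# after a patient is irrevocably enrolled.
def make_sequence(n, seed):
    rng = random.Random(seed)
    sequence = ["treatment", "control"] * (n // 2)
    rng.shuffle(sequence)                       # nobody can predict the order
    return sequence

_sequence = make_sequence(200, seed=20240501)   # kept hidden from the study team

def allocate(patient_id, position):
    # The assignment is looked up only AFTER enrolment, so the study
    # director cannot delay a patient into the "right" group.
    group = _sequence[position]
    print(f"patient {patient_id} -> {group}")
    return group

allocate("P-001", 0)
```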

In the latest Cochrane Review “Industry sponsorship and research outcome” [1], however, a form of bias is described whose occurrence, on reflection, shakes the belief that science does nothing more than explore reality as it is:

Let’s call this new form of bias “industry bias”: according to this review, studies funded by industry significantly more often report favourable results and fewer side effects, and in head-to-head comparisons the sponsor’s drug comes out favourably nearly six times as often as the competitor drug it is compared with.

None of the classic types of bias discussed above plays a role in this new form of bias. Most of the papers that entered this analysis were themselves meta-analyses of, in some cases, hundreds of individual so-called randomized trials, i.e. studies whose cohorts were formed by random allocation.

The Cochrane meta-analysis was conducted by the Cochrane Collaboration, a group of scientists who summarize the literature as completely and as free of outside influence as possible, without any vested interests of their own. Cochrane reviews are considered the most thorough because they follow a clearly defined procedure: the review is first registered, and a protocol must be submitted describing how the authors intend to proceed. Only after this protocol has been peer-reviewed may the authors go ahead. The literature search must be comprehensive, and the abstract, too, follows a tried and tested format.

So if there are any reliable results in clinical research, they are in the reviews of the Cochrane Collaboration. The reviews are known for being conservative, i.e. underestimating rather than overestimating results, because they often have very restrictive inclusion criteria.

What you need to know: this is a meta-meta-study, a synthesis of a total of 48 reviews and meta-analyses. The data basis is therefore individual meta-analyses, each of which often summarized several hundred studies, in total 9,207 studies on drugs and medical devices, the vast majority of them randomized trials (some observational studies are also included, because that is the only way to record side effects reliably).

The authors asked a simple question: is there evidence that studies paid for by companies report positive results and fewer side effects more often than those funded, for example, by the public sector? This is important because the majority of scientific studies are now paid for by industry. In other words, most of our medical-clinical knowledge has been paid for with the financial resources of companies, which in turn can use this knowledge to make money.

There is nothing at all wrong with this, provided that, as we assume, the scientific methodology is objective. If the applicable methodological criteria are adhered to (randomization, blinding, allocation concealment, and so on), there should be no difference between the results of the trials, and therefore no difference between studies funded by industry and studies funded by the public sector.

But this analysis shows that this is clearly not the case. Industry-sponsored studies report favourable results 24% more often than government-funded ones, favourable side-effect profiles 87% more often, and favourable conclusions 31% more often. And when such studies tested a company’s product against a comparator product, the sponsor’s product prevailed nearly six times as often under industry funding as under government funding.
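Because phrases like “24% more often” are easy to misread, here is a worked toy example of a risk ratio; all counts are invented for illustration and are not taken from the review.

```python
# Toy example of a risk ratio (RR); the counts are made up.
# RR = 1.24 means favourable results occur 24% MORE OFTEN under industry
# funding, not that they occur "24% of the time".
industry_favourable, industry_total = 434, 700
public_favourable, public_total = 250, 500

risk_industry = industry_favourable / industry_total  # 0.62
risk_public = public_favourable / public_total        # 0.50
risk_ratio = risk_industry / risk_public              # 0.62 / 0.50 = 1.24

print(f"RR = {risk_ratio:.2f}")                       # -> RR = 1.24
```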

You will now say: that is obvious. But stop and think about it, because it is anything but obvious. All studies are done to the same methodological standard. One could object that the government-funded studies are simply not as good methodologically. That is rather unlikely, because such studies are usually conducted once a product is already on the market. They therefore have to address the methodological criticisms that have already been voiced and, for example, often have even greater statistical power. If anything, they have a greater chance of demonstrating effects where these exist.

And the analysis shows that the differing results cannot be explained by methodological artefacts: if anything, the industry-sponsored studies were slightly better methodologically, and statistically there is hardly any difference between the study types. So methodological differences cannot be the cause.

In my view, two explanations remain. Either there is a rather large publication bias, i.e. industry systematically keeps negative studies under wraps on a large scale. In the case of antidepressants, it was shown that about one third of all results were never published [2]. It could well be that this happens everywhere and that Ioannidis is right in his assessment that most research results are wrong [3], precisely because the negative results are suppressed. This is not the case with government-funded studies, because there both researchers and funders have an interest in publishing all of the data.
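How strongly selective publication can distort the picture is easy to demonstrate with a small simulation; the effect size, noise level and publication threshold below are all invented for illustration.

```python
import random
import statistics

# Sketch of publication bias: simulate trials of a drug with a small true
# effect, then "publish" only those whose observed effect looks good enough.
random.seed(1)
TRUE_EFFECT, NOISE, N_TRIALS = 0.10, 0.30, 300

observed = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_TRIALS)]
published = [e for e in observed if e > 0.15]   # weak or negative trials stay in the drawer

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"mean of all trials:     {statistics.mean(observed):.2f}")  # close to 0.10
print(f"mean of published only: {statistics.mean(published):.2f}") # clearly inflated
```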

The second explanation would be adventurous: it would imply that the intention, the wish of the researcher or, in this case, of the sponsor, leads the result to turn out in the desired direction, despite all methodological measures taken to safeguard against such effects. And because classical experimenter effects are excluded by these methodological safeguards, the effects would have to be non-classical, perhaps even parapsychological.

To conclude, let us briefly consider these two options:

Publication bias would be the natural, but also extremely disturbing, explanation. For it would mean that between 25% and 30% of all studies, roughly 2,300 to 2,800 out of the 9,207 studies of interest here, have remained unpublished. And do not forget: each of these studies costs an estimated one to several million. The consequence would be that one can trust the scientific literature only to a limited extent, and that it practically always overestimates effects considerably.

Add to this the public perception bias created by the press’s eagerness to pounce on the first spectacular results while failing to publish corrections [4], and one must assume that no publicly proclaimed report of “medical progress” can be trusted until the finding has been substantiated by further replications.

On top of that, industry-sponsored studies report favourable, i.e. lower, side-effect profiles almost twice as often (87% more often) than other studies, and such data usually come from very large observational studies, because rare side effects can only be detected if one documents thousands of treatments. This would mean data falsification on a large scale.

The other option would be that the basic assumptions of the experimental model are wrong, namely that one can eliminate the experimenter, in this case the sponsor, and his intention by methodological measures (blinding, randomization, allocation concealment, blinded outcome assessment). Then we would be dealing with an intentional, conscious influence on material systems.

Neither of these options is comfortable, and in a sense one can choose whether one would rather see one’s faith in the soundness of scientifically generated data shattered on the rock of Scylla, the enormous publication bias, or sucked into the whirlpool of Charybdis, the impossibility of keeping the experimenter’s intention out of the result of an experiment.

Publication bias can be dealt with by only allowing studies that were registered in advance and by checking what happened to those that were registered but never published. I would assume that this explains part of the effect. But does it also make the non-classical experimenter effect unnecessary as an explanation, an effect for which we and others have found one or two clues [5, 6]?
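The registry check proposed here amounts to a simple set comparison; the trial identifiers below are invented for illustration.

```python
# Sketch of the registry check: compare trials registered in advance with
# trials that actually appeared in print; the remainder is where
# publication bias can hide. All identifiers are hypothetical.
registered = {"NCT0001", "NCT0002", "NCT0003", "NCT0004", "NCT0005"}
published = {"NCT0001", "NCT0003"}

unpublished = sorted(registered - published)
print(f"{len(unpublished)} of {len(registered)} registered trials unpublished: {unpublished}")
```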

Maybe future generations will laugh at our naivety in believing that one can arbitrarily take systems apart and still obtain valid knowledge? Maybe we should start thinking about the foundations of our world view?

Sources & Literature

  1. Lundh, A., Sismondo, S., Lexchin, J., Busuioc, O. A., & Bero, L. (2012). Industry sponsorship and research outcome. Cochrane Database of Systematic Reviews, 12, MR000033.
  2. Turner, E. H., Matthews, A. M., Linardatos, E., Tell, R. A., & Rosenthal, R. (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 358, 252-260.
  3. Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
  4. Gonon, F., Konsman, J.-P., Cohen, D., & Boraud, T. (2012). Why most biomedical findings echoed by newspapers turn out to be false: The case of Attention Deficit Hyperactivity Disorder. PLoS ONE, 7(9), e44275.
  5. Walach, H., & Schmidt, S. (1997). Empirical evidence for a non-classical experimenter effect: An experimental, double-blind investigation of unconventional information transfer. Journal of Scientific Exploration, 11, 59-68.
  6. Kennedy, J. E., & Taddonio, J. L. (1976). Experimenter effects in parapsychological research. Journal of Parapsychology, 40, 1-33.