The basis of all scientific work is reliable information about what we know. Those who have followed me this far have learned in the previous chapters that we
- have less certain knowledge than we think (because much of what might spread uncertainty, such as failed replications, is not published at all): Blog 16
- can’t actually name what counts as a scientific fact as precisely as we think, because facts are socially mediated: Blog 17
- shouldn’t always believe everything the majority accepts as valid (Blog 16, 17 and 18).
Now let’s turn our attention to this conundrum called “scientific information”. What exactly is it? When do we speak of information being scientifically clean or accurate? Or even “scientifically proven”? Or simply “scientific”?
In everyday language, when we call information “scientific”, we usually mean “methodologically validated”, “published in the scientific literature” and therefore “credible”. “Methodologically validated” is a methodological criterion; “published in the scientific literature” is an operational one; “credible” is a criterion of content. Because it is assumed that only methodologically sound results are published in the scientific literature, these are also considered credible. And because ultimately only experts, or people with sufficient detailed and methodological knowledge, can judge whether what has been published in the scientific literature is also methodologically clean, the rest of the world – and that often includes the rest of all scientists – relies on the quality assurance processes within the scientific literature.
We turn to these in the following:
Scientific literature and different types of literature
Original papers and other journal articles
Scientific literature comes in different formats. So-called “original papers” are characteristic of the natural sciences and of the empirical social sciences – psychology, sociology, education and economics. They report empirical or theoretical findings. When a research group has done an experiment, conducted a clinical or other study, or made an observation, or when a researcher explains empirical data theoretically or sets up a new theoretical model that explains existing data or predicts new ones, this is called an original work. “Original” because the information is new and has not been available before. Usually, such original work is the result of research initiated by the researchers themselves, or in some cases of commissioned research suggested or paid for by someone else. In any case, the information is new in the sense that this particular study has never been done before. It may well be that another, similar study exists with a similar or different result; then we speak of replication. Such original work is the basic currency of empirical research.
However, no science could be built on a hodgepodge of data, no matter how good the data were. They also need to be ordered, evaluated and interlinked. This is done by other types of original work: theoretical papers, overview papers that summarize a state of knowledge, and analytical texts that examine commonalities and divergences and draw conclusions or propose theoretical models. Meta-analyses summarize empirical data quantitatively; systematic or narrative review papers do so in narrative form. Meta-analyses, too, usually count as original works, because they produce a new form of synthesized knowledge. All these forms of scientific literature – original empirical papers, overviews, reviews and meta-analyses, analytical and theoretical texts – contain novel information.
To be distinguished from these are texts that comment on, evaluate or contextualize such information. This occurs in the form of editorials: commentary texts conveying the opinions of people who work as editors or reviewers for journals and who write either on their own initiative or at the request of the editors. Many journals also contain sections for letters. In good journals such texts are also vetted, and they contain either expressions of opinion on published articles – dissent or comment, for example – or small findings of their own, or they point readers to material published elsewhere.
Overviews and book chapters
As a rule, the main activity of science – at least in the empirical sciences – takes place in journals (this is different in the humanities, where the scientific monograph is still the main currency). This is a richly confusing terrain, for there are many journals on all kinds of subject areas, and not everything that belongs to a field is always neatly published there. Therefore, after a while, summaries of a research area become very useful: the authors take the trouble to compile all available information across the various journals, with the help of scientific databases and ideally also by asking the authors directly. These summaries are then often published, in turn, in journals as meta-analyses or as systematic or narrative reviews. They are important and are among the most cited papers in the literature, because anyone who wants to enter an area will first get an overview by reading such review papers.
Very often, at a later stage, one never goes back to the originals, but only to the “state of the art” as formulated by an authoritative summary.
This task also frequently falls to book chapters. They are invariably part of so-called edited works. Here, scientists who have become knowledgeable in a field ask others to compile texts on a particular topic – summaries of empirical findings, theory, their own data, or all of the above – and thus provide a large overview on a particular topic. This can be useful for readers who cannot or do not want to delve into the depths of research, or who want to get a first impression of the state of research. It is not uncommon for such edited works to emerge from conferences or symposia to which opinion leaders or renowned researchers have been invited.
Such reviews and book chapters are thus the second level of scientific information flow.
Manual articles, encyclopaedias and textbooks
A step further from the original research process are summaries for the general or less specialized reader. These are usually found in handbooks, encyclopaedias or textbooks. Handbooks are sometimes still written for specialized readers, but more often for those who want a general overview, or for practitioners and scholars of other disciplines. Encyclopaedias provide very general information for all possible readers, and textbooks are introductions for beginners who want to familiarize themselves with the state of the art in a field.
The complementarity of precision and comprehensibility and the half-life of knowledge
We can conclude from the above: original papers are very close to the research process. They are usually precise and accurate, but often difficult to understand for the non-specialist, because they presuppose a great deal – methodological knowledge, background knowledge about the topic under study, prior knowledge that is only hinted at because it is taken for granted. Precision and comprehensibility usually exclude each other in an original article, because space is limited and because the author wants to communicate a particular new finding and therefore conveys context and background only to a certain extent. This is why I say that comprehensibility and precision are complementary – though only in a pragmatic sense. In principle, comprehensibility and precision could be achieved together, but only at the price of a very detailed presentation, which is out of the question for journal articles.
However, at the level of original papers, at least in the life sciences, the following often happens: Findings become outdated, are contextualized by others, are relativized by counterfindings, are qualified by later studies as having only limited validity, and so on. In other words, the level of original findings changes its structure rapidly. The half-life of knowledge, for example in the medical field, but also in psychology, is short. What is published in an article from 2010 is not necessarily true today.
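The notion of a half-life of knowledge can be made concrete with a small decay calculation. This is a minimal sketch; the five-year half-life used below is a purely hypothetical figure for illustration, not a measured one.

```python
# "Half-life of knowledge" modelled as exponential decay (illustrative only).

def fraction_still_valid(years: float, half_life: float) -> float:
    """Fraction of findings still considered valid after `years`,
    assuming validity decays exponentially with the given half-life."""
    return 0.5 ** (years / half_life)

# A paper from 2010 read fifteen years later, under a hypothetical
# five-year half-life: only about 12.5% of its claims would survive.
remaining = fraction_still_valid(15, 5)  # 0.5 ** 3 = 0.125
```

The exact half-life varies by field and is itself contested; the point of the model is only that the older an original finding, the less one should take it at face value.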
There is therefore another complementarity: that between comprehensibility and validity. The more comprehensible texts – textbooks, book chapters, encyclopaedias – often convey knowledge that is already outdated by the time they are published. It is true that the texts are easy to read. But their content is not necessarily as reliable as that of original works.
The validity of knowledge and peer review (and what it fails to do)
One of the qualifying features of scientific literature is the quality assurance that so-called peer review is supposed to provide. What do we mean by this? Scientific journals are divided into those that are peer-reviewed and those that are not. The latter are mostly editor-only journals, where the editor alone decides what is published and what is not. These are not necessarily bad journals if the editor or the editorial board are knowledgeable. Many historically important works – such as Einstein’s work on the theory of relativity – were published in such journals, and in general the texts of scientific discussions in earlier times were published exclusively in journals of “learned societies”. Anyone who had something new to say could publish there. Because the number of those who want to publish is large and the space in the “good” journals is limited, there is a selection that is harder the more prestigious the journal, at least in the life sciences.
But how is the decision made? The first filter is the editorial office. Depending on the journal, an academic part-time editor, a full-time editor or a publisher’s editor checks the submitted texts for suitability. This suitability check often has nothing to do with content or potential significance; it often simply answers the question: is our readership likely to be interested in the information presented here? Does it fit into our journal? If yes, the submitted article is passed on into the review process. Depending on the journal, two to four or even more reviewers read and comment on the text. Depending on the journal and the editor, the reviewers must agree in their verdict, or a clear majority must emerge, which the editor usually follows but does not have to. Often the reviewers have requests for changes, which the authors then have to accommodate for the paper to be accepted and published.
From this, one can already see what this peer review process can and cannot do, and also what it prevents. The peer review process ensures that submitted papers contain no gross errors – of a methodological nature, for example – that they are reasonably comprehensible (because reviewers are test readers), and that they report findings that are currently communicable in the scientific community. The review process thus prevents false positives; statistically speaking, it keeps the alpha error small.
The peer review process cannot ensure that the information is important, that it is relevant, that it goes further. And the review process certainly cannot ensure that something very important has not been overlooked. This is because it is usually filtered beforehand: papers that are rejected in advance by the editorial office “because they are not of interest to the readers of this journal”, “because they contain information that may not correspond to the current state of knowledge”, or whatever all the rejection reasons are, could in principle hold fascinating information that will never be seen by any reviewer. If authors are very persistent, then it may well be that such information ends up somewhere in the scientific literature, but just not in the top journals, but somewhere on the fringe.
This means: The scientific quality assurance process is conservative. It guarantees that not much that is wrong will be published. But it does not guarantee that everything that is important and exciting will be published. And it certainly does not guarantee that what is not published is actually irrelevant or wrong. Because here, too, the collective element comes into play: only what is majority-worthy is published in the mainstream journals. The rest appears marginally or not at all. In other words, peer review is primarily an instrument for maintaining power and securing the status quo. This is also now recognized by research on the subject [1-3]. Peer review can ensure the basic validity of results and the social communicability of findings. But no more, and even that only with limitations. For reviewers, too, are human beings with preferences and prejudices, with preconceptions and limitations.
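The asymmetry described above – a small alpha error bought at the price of a large blind spot – can be illustrated with a toy filter model. All numbers, category names and acceptance probabilities below are hypothetical, chosen purely for illustration.

```python
# Toy model: peer review as a conservative filter (all numbers hypothetical).
# Submissions fall into three invented categories; review catches most
# flawed work (small alpha-like error) but also screens out much
# sound-but-unfashionable work (large beta-like error).

submitted = {"sound_mainstream": 70, "sound_novel": 10, "flawed": 20}
p_accept  = {"sound_mainstream": 0.80, "sound_novel": 0.20, "flawed": 0.05}

published = {k: submitted[k] * p_accept[k] for k in submitted}
total_published = sum(published.values())  # 56 + 2 + 1 = 59

# Share of published papers that are flawed: the alpha-like error.
alpha_like = published["flawed"] / total_published   # about 1.7%
# Share of sound novel work that never appears: the beta-like error.
beta_like = 1 - p_accept["sound_novel"]              # 80%
```

Under these invented numbers, barely 2% of what is published is flawed, yet four out of five sound-but-novel papers never appear at all – which is exactly the conservative bias the text describes.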
Peer review doesn’t just happen with journals, though. Good science book publishers – the “University Press” publishers and the big science publishers, such as Springer, Elsevier, Wiley, Blackwell, MIT Press – also safeguard themselves through peer review. However, one must bear in mind the following: reviewers for journals work for free. It is a service to the scientific community, so to speak, and at best one earns some immaterial credit for having been a reviewer for a journal; one can intervene a little in the scientific process and somehow feel important. For reviewing books, by contrast, one usually gets something – money, book vouchers; not much, but still. Reviewing books, however, is usually very time-consuming. So it is mostly done by people who have time on their hands, who are at the beginning of their career, who want to do an editor a favour, or who have a genuine interest in the subject.
Other edited works or textbooks are usually not reviewed at all, or only by series editors. Here the expertise of the publishers’ in-house editors, or of the editors the publishers engage, stands as the main guarantor of the usefulness of, say, a textbook.
But one thing the peer review process absolutely cannot do deserves emphasis. We often make the erroneous assumption that what is not scientifically published, i.e. does not appear in the scientific literature, is also non-existent, irrelevant or wrong – because, so the implicit assumption goes, if it were otherwise, it would have been found or published somewhere. As the brief outline above should make clear, neither the scientific process nor the quality assurance within the publication process can deliver this. The scientific process is socially structured and follows what is majority-worthy and interesting to opinion leaders, and the publication process reflects this. At best, it ensures, with a conservative touch, that nothing is published that contains blatant methodological errors or is incomprehensible. And not even that is guaranteed.
Example – The measles virus trial and the literature on which it is based
In my blog, Part 17, I used the measles virus trial as an example of the social nature of a scientific fact – here, the measles virus. We recall: Dr. Lanka had offered a prize of €100,000 to anyone who could bring him a scientific publication proving the existence of the measles virus in a scientifically impeccable way. Dr. Bardens accepted the challenge and brought five publications. Because Lanka did not pay, Bardens sued him and won in the first instance. I then looked at the papers and was surprised to find that Lanka’s challenge was not as naive as it looked at first sight. For the five publications all built on each other, and in such a building the coherence of the very first is the cornerstone that holds everything. If it collapses, the rest is wastepaper.
I had pointed out that the methodological soundness of this first paper was not so impeccable, although published in the scientific peer-reviewed literature. I had also pointed out that later papers assumed the validity of this first paper and simply disregarded the cautionary formulations and contingencies that could be inferred from the first paper.
Interestingly, Lanka has now won on appeal. The Stuttgart Higher Regional Court has rejected Bardens’s claim; Lanka does not have to pay. The claim that there is no scientific work conclusively proving the measles virus thus still stands unchallenged – and now on relatively firm ground.
What has happened here? Apparently, a very first paper did indeed bring to light an interesting finding that, in the research climate of the time, seemed to prove what everyone expected, namely a specific causative agent of measles. The initial review process was positive, and the paper was published. The fact that the paper contained many “ifs” and “buts”, cautionary phrases and admonitions, was ignored just one or two years later. All that was needed was a reference to the published work, and a fact was created. Majority pressure built up, and alternative ways of thinking or approaching things disappeared. Scientific literature thus also has something of a railway track about it: it channels thinking and research in a certain direction. For those who read it see what is “publishable”, i.e. what is suitable for the majority, what “people” want to hear and read at the moment. And suddenly everyone is running and looking in only one direction. So scientific literature, although quality-assured, is not error-free and not infallible.
Popular literature and Wikipedia
Often students cite popular literature or Wikipedia. Why is this problematic? We saw above that the validity of academic literature is already limited and its reliability should be treated with caution. This is even more true for popular literature and for Wikipedia as well. But for different reasons.
Popular literature, for example in the form of non-fiction books or articles from popular magazines, even if they are products of the so-called quality press, is primarily not scientific, but written by journalists for the public. Journalists often understand a bit more than others about the matter, but they too have limits – in terms of the time they can spend on understanding a matter, or in terms of their impartiality.
And journalists are always under the dictates of the editorial board, which has political-social interests above all. Either it has a macro-political interest dictated by the owners or editors, or a micro-political interest dictated by advertisers and their sentiments. In either case, popular literature from periodicals can usually only serve to locate the original sources and to get a glimpse of them for oneself.
The same applies to popular magazines such as Psychology Today or Brain Research and the like. At best, they are good as a quarry to make it easier to find the originals, but not as a scientific source. The information is usually too pre-chewed for that. Popular non-fiction should also be treated with caution, unless an author summarizes in his or her own simple words the findings that he or she has published in the scientific literature. As soon as another author takes over this task, caution is advised.
Wikipedia is a case of its own. The idea(l) of a freely accessible online encyclopaedia, to which anyone can contribute and which is thus always up to date, is of course terrific. I myself like to use it wherever simple and relatively uncontroversial factual information is needed – biographical dates, brief initial information on a topic, biological or biochemical facts, flags or gross domestic products of countries, population figures of cities, and so on. But whenever questions of evaluation come into play, caution is advised.
For while in a classical encyclopaedia the expertise of the commissioned authors and of the editors somewhat guarantees the quality of the information, on Wikipedia this is abundantly unclear. It is true that assertions must be backed up, usually by references. But what happens when a reference is dubious, or when citations are circular – when a reference points to an article that in turn refers back to another article that should contain the information but does not?
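The circular-reference problem just described can be sketched as a cycle check over a small citation graph. This is a minimal sketch; the article names and the graphs below are invented for illustration.

```python
# Sketch: detecting circular sourcing in a citation graph.
# Each key cites the articles in its list; a cycle means the "evidence"
# eventually points back to itself.

def find_cycle(cites: dict[str, list[str]]) -> bool:
    """Return True if the citation graph contains a cycle (depth-first search)."""
    visiting, done = set(), set()

    def visit(node: str) -> bool:
        if node in visiting:      # back-edge: we have come full circle
            return True
        if node in done:
            return False
        visiting.add(node)
        for ref in cites.get(node, []):
            if visit(ref):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(n) for n in cites)

# Article A cites B, B cites C, and C cites A again: circular sourcing.
circular = {"A": ["B"], "B": ["C"], "C": ["A"]}
# Here the chain bottoms out in an article that actually carries the claim.
acyclic  = {"A": ["B"], "B": ["C"], "C": []}
```

A chain of references only transmits reliability if it terminates in a source that actually contains the claimed information; a cycle transmits nothing at all.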
It is precisely with contentious material that it becomes apparent how the anonymous structure of authorship on Wikipedia tempts people to carry ideological disputes onto an open platform under the banner of supposed factuality. A good example is homeopathy. The mainstream – and with it Wikipedia – gives the impression that everything is clear: homeopathy is scientifically proven to be unscientific, full stop, underlined, done. A doctoral thesis by Marius Beyersdorff, which we supervised, analysed how controversial all this actually is and how strongly Wikipedia in particular is distorted by discourses of power. Beyersdorff recorded more than a thousand edits and many page lockings – the result of so-called “edit wars”, in which opposing camps fight and delete each other’s contributions until the article is blocked. And here one can see: in the end, a few active authors do little else, almost around the clock, than try to impose their opinion – for which each side can offer good arguments and equally good references.
Recently, the filmmaker Markus Fiedler presented a film, “Die dunkle Seite der Wikipedia” (“The Dark Side of Wikipedia”), after discovering that he was insulted and reprimanded in the worst way when he tried to set a fact straight. From this it follows: whenever facts are at stake that are not completely uncontroversial and politically unobjectionable, Wikipedia is a questionable source. It can certainly serve as a way to reach the sources, which are, after all, at least partially cited. But it cannot serve as a stock of scientifically clear and incontrovertible knowledge, because the background structures of those who hold the power to define knowledge on Wikipedia are not transparent enough.
The latter also tends to apply to scientific publications in the proper sense. But there one could at least demand clarity if necessary, and this has already happened many times in appeal proceedings. On Wikipedia, it is unclear what competence the authors of articles have, and it is not uncommon for people whose professional biographies have stalled elsewhere to derive prestige from being authors and editors on Wikipedia.
The hidden hierarchy of scientific information
Often one reads in newspapers that a paper has been published in a “renowned” or “high-ranking” journal, or scientometric characteristics such as the “impact factor” of a publication, or even of an author, are cited. What is behind this?
Rankings, hierarchisations and evaluations seem to be implicit in the human social fabric, and the same applies in the context of scientific information. The reason is easy to see: our reading time is limited, and everyone tries to get hold of the information most important to him or her. How do we do that? We pick up sources that we believe contain the most important information and trust that other, hidden secondary processes will pre-filter the information accordingly.
Therefore, sources of information, which in the past were known for carrying important, correct and significant information, are becoming more popular and are being approached by more and more potential authors to publish their findings. Since space is limited, the editors pick and choose. They choose what they think is most important to their clientele and what they like to see. This is how it happens that in the medical field, journals like the New England Journal of Medicine, the Journal of the American Medical Association (JAMA) or Lancet become the flagship journals. These then only publish certain types of studies – currently, for example, large randomized clinical trials or very large, long epidemiological studies – that are of interest to a large general readership.
In the general scientific field, journals such as Science or Nature, with their respective spin-off journals, fulfil this function. Below them are other journals that serve, for example, as mouthpieces of large professional associations. Because these journals have a large worldwide readership, the papers published in them are also widely read and cited more often than others.
This means that a paper achieves a high “impact”. The impact factor of a journal is calculated from how often its publications are cited by papers elsewhere, relative to the number of papers it has published: citations received in a given year to articles from the preceding two years are divided by the number of articles published in those two years. An impact factor of 10 means that the articles a journal published in the last two years were each cited, on average, ten times in the current year. For a paper in a smaller journal to be cited a lot, it not only has to be very good, it also has to strike a chord, i.e. it has to be found, read and cited via search strategies.
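As a minimal sketch, the two-year impact factor can be computed like this (the journal and all its figures are hypothetical):

```python
# Two-year impact factor: citations received this year to articles from the
# previous two years, divided by the number of those articles.

def impact_factor(citations_this_year: int, articles_last_two_years: int) -> float:
    return citations_this_year / articles_last_two_years

# A hypothetical journal: 200 articles published in the last two years,
# cited 2000 times this year -> impact factor of 10.
if_value = impact_factor(2000, 200)
```

Note that the average hides a highly skewed distribution: a handful of much-cited papers typically carries most of a journal’s impact factor, while many of its articles are cited rarely or not at all.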
Therefore, everyone at first wants to publish in the so-called high-impact journals if possible, which has a fatal effect: one researches, writes and publishes what one thinks (or knows from experience) these journals will publish. This ultimately amounts to self-censorship on the part of the scientific community. It tends to promote conservative, consensual thinking and runs counter to the anarchic-critical impulse that has actually always been the driving force of scientific advancement.
So the much-cited reputation of journals or authors comes from the fact that they are much read and much cited by others. That is certainly not a bad thing. But it is also no guarantee of quality or originality. It merely says that someone can publish material that is interesting to the mainstream of science and is received there. The history of science is full of examples of groundbreaking discoveries being made not at the centre but at the margins of science. Leibniz, dead now for more than 300 years and considered one of the greatest polymaths of modern times, was never at the academic centre of things during his lifetime and was eyed with scepticism by the Royal Society and at times also by the French Academy.
The examples are countless. They are meant to say: being able to publish well within the mainstream, or being published in the mainstream, is nice for authors and shows the reader that the issue is reasonably uncontroversial. But it does not mean that the issue is important, correct or significant. Nor, conversely, does the fact that something is not discussed within the scientific mainstream mean that it is uninteresting, wrong or refuted, as is often assumed. That is only the case when science has dealt with an issue very thoroughly and over a long period, and has clearly refuted the relevant assertions. But because science mostly strives to find and prove things in a positive sense, the number of things that might also exist, and about which science has not yet made a definitive statement, is countless.
Frequently, however, adherents of fringe theories make the mistake of thinking that something is “scientifically proven” if it is published in some scientific journal. As my blog No. 17 has shown, this is not tenable. For “scientifically proven” does not mean “published in a scientific journal” but “accepted as indisputable by the majority of the scientific community” – and these are quite different things. Conversely, something can be “scientifically” accepted – for example because it is published in mainstream media and thus appears purified by majority blessing – and yet still be questionable in a scientific sense. This applies, for example, to a large part of psychopharmacology. It is part of the major guidelines, is practised everywhere, is published in the best journals. And yet the discussion is reopened as soon as really knowledgeable and renowned authors publish substantial criticism that is heard and taken note of, as Peter Gøtzsche recently did in his new book.
So we see: Scientific information is less certain than we usually think, and scientific findings are less fixed than they seem. This lies in the nature of things. For one of the defining process features of science is scepticism, even of its own findings and of facts previously thought to be certain. Therefore, scientific information can only ever be defined operationally and formally, but never in terms of content.
In conclusion: Ad Fontes
I don’t know how many citations I have traced in my career only to end up in the Orcus of information. An estimated 30% of all the sources I have followed up were either misquoted or did not contain the information attributed to them. I still remember Marcello Truzzi, the founder of the “Skeptical Inquirer”, who said in a lecture at the NIH, I think in 1995, that Harvey’s discovery of the heartbeat was met with laughter and criticism throughout Europe and that no one believed him. I found this claim so outrageous that I asked Truzzi for his sources. He sent them to me. I followed them up. They all led nowhere and were either false or irrelevant. But Truzzi was right, as I then found out when I searched for myself: Emilio Parisano, the spokesman of the European medical profession, had really written that “no one in Venice can hear a heartbeat”. However, this was not in any of the sources Truzzi had given me, only in the one I had found myself. Truzzi was a good scientist. Perhaps he just made a mistake in his haste.
But one can learn from this: Whenever something is important, one should not settle for tertiary or secondary sources, but go to the primary sources. These are the original studies, the original citations, the original authors.
In naturopathic circles, for example, the quote attributed to Hippocrates is popular: “Let your food be your medicine, let your medicine be your food.” I don’t know on how many lecture slides and in how many papers I have seen this quote. Because I wanted to use it the other day, I had an assistant search for it, for a whole day: right through the Corpus Hippocraticum, up and down – without finding it. Presumably it is a condensation of Hippocratic teaching according to its sense, not a saying that can be found literally in that form.
Thus: Ad fontes – to the sources. Whenever possible, read original, trace quotes and distrust all secondary colporteurs. Always, everywhere. This of course includes me, as well.
- Hopewell, S., Collins, G. S., Boutron, I., Yu, L.-M., Cook, J., Shanyinde, M., et al. (2014). Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study. British Medical Journal, 349, g4145.
- Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64, 2-17.
- Ritter, J. M. (2011). Impact, orthodoxy and peer review. British Journal of Clinical Pharmacology, 72, 367-368.
- Beyersdorff, M. (2011). Wer definiert Wissen? Wissensaushandlungsprozesse bei kontrovers diskutierten Themen in “Wikipedia – Die freie Enzyklopädie” – Eine Diskursanalyse am Beispiel der Homöopathie. Berlin: Lit-Verlag.
- Pörksen, B. (2015). Wo seid Ihr, Professoren? Die Zeit, (31).
- Fischer, K. (2006). Aussenseiter der Wissenschaft: Besichtigung einer Lebenslüge kollektiv organisierter Wissenschaft. Forschung & Lehre, 10, 560-563.
- Silver, R. B. (Ed.). (1997). Hidden Histories of Science. London: Granta Books.
- Antognazza, M. R. (2009). Leibniz: An Intellectual Biography. Cambridge: Cambridge University Press.
- Gøtzsche, P. C. (2015). Deadly Psychiatry and Organised Denial. Copenhagen: People’s Press.
- Parisano, E. (1647). Recentiorum disceptationes de motu cordis, sanguinis et chyli. Leiden: Ioannis Maire, p. 107.