I have an unhealthy interest in what some like to call the “pseudosciences”. Having spent quite a bit of time trying to understand this category from historical, sociological, and philosophical perspectives, I have also developed a keen interest in another category, “bad science”. Bad science and pseudoscience should not be confused with each other, however. While pseudoscience may also be bad science, most bad science is not generally considered pseudoscience. In fact, bad science is normal. Pseudoscience, on the other hand, is defined precisely by its deviation from the norms of science.
While that norm is certainly defined in part by methodological standards, it is certainly also defined by social, cultural, and historical factors. “Normal science” is what “normal” scientists do (in “normal” laboratories, “normal” universities, and backed by “normal” means of finance). Pseudoscience is what the cranks do, often in their spare time and with the backing of questionable coteries of interests.
Bad science, being normal, has the legitimacy conferred by association with respected institutions. For this reason, bad science ought to be a much graver concern than pseudoscience. Especially to people who care about the state of science, and about the welfare of a modern society that is increasingly dependent on reliable information on crucial topics, bad science is without doubt the more dangerous of the two. How we as a society solve the energy crisis, stop global warming, cure cancer and Alzheimer’s, and feed 10 billion people will eventually be decided by the readers of Nature, not the Fortean Times.
It is therefore supremely important that scientific journals are reliable and that bad science is kept at bay. Among scientific skeptics and professional debunkers, much time and effort has been wasted on pointing out the obvious holes, gaps, and inconsistencies in pseudoscience. More of them should follow the example of Ben Goldacre and give attention to the vastly more consequential problem of normal science gone bad.
Bad science is a category that covers a whole continuum: from poor research methods and sloppy reasoning, through unchecked biases, to the tinkering with data and conclusions to fit the desired outcome (or the outcome more likely to lead to further funding), with outright fraud at the extreme end. Here in the Netherlands we had a serious case of such fraud at the end of 2011, when the respected and well-known social psychologist Diederik Stapel was revealed to have committed massive fraud throughout his career.
Cases like these are shocking – particularly to the academic communities and disciplines in which they occur. They instill a general uncertainty about published research, which undermines the system of trust among peers on which the whole scientific enterprise is ultimately built. But fraud is only one very visible and extreme part of the problem, for while elaborate fraud is still an exception, bad science is normal.
I was recently reminded just how normal it is when the people at Clinical Psychology emailed me an infographic about the extent of the problem in psychological research in particular. I promised to spread it, and this post is the result. The situation as presented by the infographic looks pretty bad, and no doubt needs to be taken seriously. If only half of these findings are true, or even half of them are half-true, a serious credibility problem is arising. It cannot be ignored, and should be dealt with in the professional, self-correcting manner in which science ought to proceed. The graphic does suggest three possible ways to remedy the situation and make science more honest, and though I am not entirely convinced by all of these (particularly the difficult question of whether anonymous publication should be made the norm), the discussion is very welcome.
Created by: ClinicalPsychology.net
This blog post by Egil Asprem was first published on Heterodoxology. It is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.