I enjoy reading Internet debates; in particular I enjoy reading what happens when rationality and irrationality collide. The Web serves as a no-man’s land in the battles between the forces of religion and atheism; science and pseudoscience; superstition and logic.
As I read and participate in more of these debates, I am coming to realise that a fundamental problem affects the marshalling of evidence for deployment against the enemy. Partly a consequence of the glut of information available online, it is what I call the Problem of Uncertainty.
The issue is most starkly observable when the argument is either between two competing scientific models or between science and a well-established pseudoscience such as astrology or homeopathy.
To illustrate the problem I will refer to the debate that occurred in the comments thread of this Independent Online article about homeopathy from August 2010:
The very first comment begins:
Dr Peter Fisher commenting on apparent success in reducing the incidence of Leptospirosis in large scale trials in areas of known susceptibility in Cuba: “This is a very large study and its results, if confirmed, have huge potential impact. We need more research into the effectiveness of homeopathic preparations in preventing infectious diseases, complications, and the economic viability of a homeopathic approach.”
1. Bracho G, Varela E, Fernández R, et al. Large-scale application of highly-diluted bacteria for Leptospirosis epidemic control. Homeopathy 2010; 99: 156-166.
That reference to a science journal looks pretty legitimate, doesn’t it? Those of us with a science background might suspect something amiss in the experimental protocol, or wonder how many people have attempted to replicate the results. We might even be dubious of the trustworthiness of the journal “Homeopathy”. We cannot, however, objectively judge the paper itself without having read it.
That may be an easy problem for many of you to address: if you work in the sciences or otherwise have access to journal archives, you can simply look the article up and analyse it for yourself. For the layperson it’s more difficult. Without online access to the journal (unless one wishes to pay $31.50 to ScienceDirect – which appears to be the going rate for Homeopathy articles – http://www.sciencedirect.com/science/journal/14754916), reading the paper means traipsing to a library to look it up, which is likely to be too much effort for someone trying to reach an informed view on the subject from the comfort of his or her own home.
This is what I mean by the problem of uncertainty. There is so much material published now, whether in reputable (or semi-reputable) journals or on the Internet, that a supposedly scientific paper title and abstract can be found to support almost any form of pseudoscience.
Compounding the issue are the occasional contributions from apparently authoritative scientists that lend credibility to pseudoscientific principles. One example is recent research by Nobel laureate Luc Montagnier, co-discoverer of HIV, which purports to have found a bizarre DNA teleportation effect in highly dilute bacterial solutions (I think? – http://www.homeopathyeurope.org/nobel-prize-winner-reports-effects-of-homeopathic-dilutions).
This research, combined with Prof. Montagnier’s illustrious prior accomplishments, is being touted as proof by many supporters of homeopathy that the ‘scientific’ principles behind the pseudoscience are actually plausible.
Other debates on homeopathy that I’ve read have referred to papers or books on the alleged ability of water to maintain a ‘crystalline’ structure; the presence of ‘nano-particles’ of metals even in dilutions beyond the limit implied by Avogadro’s number (http://www.scribd.com/doc/46507497/Homeopathy-A-Nano-Particulate-Perspective); and the claim that ice crystals grown in water exposed to a particular emotion are affected accordingly (http://www.life-enthusiast.com/twilight/research_emoto.htm).
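On the Avogadro point, a quick back-of-the-envelope calculation shows why such dilutions are implausible. This sketch assumes a typical homeopathic ‘30C’ preparation, i.e. thirty successive 1:100 dilutions, starting from a generous one mole of active substance:

```python
# Back-of-the-envelope: how many molecules of the original substance
# survive a 30C homeopathic dilution (thirty successive 1:100 steps)?

AVOGADRO = 6.022e23          # molecules per mole

starting_moles = 1.0         # assume a full mole of active substance to start
dilution_factor = 100 ** 30  # 30C = (1:100) applied thirty times = 10^60

molecules_remaining = starting_moles * AVOGADRO / dilution_factor
print(molecules_remaining)   # on the order of 1e-37 – effectively zero
```

In other words, once the dilution factor exceeds Avogadro’s number, the expected count of remaining molecules drops below one, which is why claims of residual ‘nano-particles’ at such dilutions draw scepticism.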
So how can average people with an interest in science or pseudoscience, but without much of a background in the subject, be helped to sort the wheat from the chaff and establish which papers are bunk and which aren’t, without relying on appeals to authority?
I propose the formation of a website that serves a similar purpose for science as Snopes (http://www.snopes.com) does for urban legends. Or perhaps it would be more like the Metacritic (http://www.metacritic.com) of scientific review. To begin with it would provide the following functionality:
1) Various hard data about a paper could be logged: for example, the type of protocol used (e.g. double-blind or single-blind design, statistical methods), the authors, the abstract, the number of successful replications of the experiment (with links to any papers or articles describing the replication), number of publications, and number of peer reviewers.
2) Storage of reviews of each paper highlighting any potential problem areas with scope for rebuttal by the writers of the original paper – a sort of extensive peer-review system. Any reviewer will be logged and can be rated according to his or her own past reviews and past papers.
3) Data about journals, organisations and so forth with clear information about any known biases or links to organisations who might influence the articles published. Rankings would enable an idea to be formed of the reliability of any given journal.
4) An overall soundness ranking for each paper would be calculated based on an analysis of the various factors that make up the paper’s profile on the site.
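To make point 4 concrete, the soundness ranking might be a weighted combination of the factors logged in points 1–3. The factor names, weights, and the diminishing-returns treatment of replications below are purely illustrative assumptions, not a finished methodology:

```python
# Hypothetical sketch of an overall "soundness" score for a paper,
# combining factors the site would track. Weights are illustrative.

def soundness_score(replications: int,
                    protocol_quality: float,    # 0-1, e.g. double-blind RCT near 1.0
                    journal_reliability: float, # 0-1, from the journal rankings
                    review_score: float) -> float:  # 0-1, mean rating of stored reviews
    """Return a 0-100 soundness ranking from the paper's profile."""
    # Diminishing returns on replications: the first few matter most.
    replication_factor = 1 - 0.5 ** replications  # 0 reps -> 0.0, many reps -> ~1.0

    weights = {
        "replication": 0.4,
        "protocol": 0.25,
        "journal": 0.15,
        "reviews": 0.2,
    }
    score = (weights["replication"] * replication_factor
             + weights["protocol"] * protocol_quality
             + weights["journal"] * journal_reliability
             + weights["reviews"] * review_score)
    return round(100 * score, 1)

# A well-replicated, well-reviewed trial in a reliable journal:
print(soundness_score(replications=5, protocol_quality=0.9,
                      journal_reliability=0.8, review_score=0.85))
# An unreplicated paper in a dubious journal:
print(soundness_score(replications=0, protocol_quality=0.3,
                      journal_reliability=0.2, review_score=0.4))
```

The key design choice in any real version would be keeping the weighting transparent and open to challenge, for exactly the reasons discussed below.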
I’m not saying that this would be a magic bullet that would make it simple for people to make up their minds about which papers to trust and which to shun, but I do think that a service of this kind would be a good start.
I’m sure there are many problems that would need to be ironed out and, of course, the service would have to be run in the most transparent and unimpeachable way if it were not itself to become a misleading authority on scientific papers. The idea is to provide tools to make it easier for people to weigh up a paper’s merits, not to replace the judgement process altogether.