Key mathematical discovery made vague dimensions explicit


Under the Microscope: Prof William Reville

Science makes extensive use of statistics in extracting meaning from complex data, and science journals will not publish papers unless the underpinning data has been statistically processed to acceptable editorial standards.

When a paper claims to have found something, it must demonstrate that this finding is extremely unlikely to have been a fluke. The standard statistical tests for demonstrating this are being questioned, and a method called Bayes's Theorem, once popular but long neglected, is finding favour again. A readable account of Bayes's Theorem is in 25 Big Ideas by Robert Matthews (One World, Oxford, 2005).

The idea first appeared in 1763 in a paper by Rev Thomas Bayes, published by the Royal Society. Bayes's Theorem, in a nutshell, calculates how to update our level of belief in something in the light of new evidence, but this calls on us to first state our current level of belief (prior probability) based on what we now know. You can see a potential problem here - if there has been little or no previous research our current level of belief may be little better than a guess.

Bayes was unable to solve the maths associated with calculating prior probability in the absence of evidence. However, 10 years later this was solved by the mathematician Pierre-Simon Laplace who stated Bayes's Theorem in its modern form. He used the theorem to confirm the suspicion that significantly more boys were born in Paris than girls.


Bayes's Theorem became the standard method for analysing scientific data, and this lasted until the early 20th century, when statisticians questioned the method on the grounds of its subjectivity. The theorem was branded as fatally subjective and was effectively abandoned by scientists. However, in the 1980s statisticians found practical ways to handle the problem of specifying prior probabilities, and the theorem is now recognised as the most reliable method of gaining insights from a welter of complex data.

Bayes's Theorem calculates the odds of a theory being true in the light of the evidence. If new evidence (E) comes to light, then the odds on the theory (T) being right before that evidence, Odds(T), must be updated to a new value, Odds(T given E), according to the formula: Odds(T given E) = Odds(T) multiplied by LR, where LR (the likelihood ratio) is the probability of E emerging if T is true divided by the probability of E emerging if T is false. Thus, if E is just as likely to emerge whether T is true or false, then LR = 1 and E neither strengthens nor weakens the case for T.
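To make the arithmetic concrete, here is a minimal Python sketch of the odds-update rule just described; the function name and the example figures are illustrative, not taken from the column.

```python
# A minimal sketch of the odds form of Bayes's Theorem described above.
# The function name and the example numbers are illustrative assumptions.

def updated_odds(prior_odds, p_evidence_if_true, p_evidence_if_false):
    """Return Odds(T given E) = Odds(T) * LR, where LR is the likelihood ratio."""
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    return prior_odds * likelihood_ratio

# A theory we initially rate at even odds (1:1). The evidence is four times as
# likely to emerge if the theory is true as if it is false, so the odds rise
# from 1:1 to 4:1.
print(updated_odds(1.0, 0.8, 0.2))  # 4.0

# If the evidence is equally likely either way, LR = 1 and the odds are unchanged.
print(updated_odds(1.0, 0.5, 0.5))  # 1.0
```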

Here is an example of a probability problem that can be worked out using Bayes's Theorem, but is derived here using general logic (see An Intuitive Explanation of Bayesian Reasoning by E Yudkowsky, easily found online). You have tested positive for a disease - what is the probability you have the disease? One per cent of the population has this disease. Your chances of having the disease are determined by the accuracy and sensitivity of the test and by the background (prior) probability of the disease. It is known that the test is 95 per cent accurate but gives a 5 per cent incidence of false positive results.

Of 100 people we expect only one to have the disease, but about six will test positive - the one person who has the disease plus roughly five false positives. So, if you test positive, you have only about a one in six chance of actually having the disease.
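The same answer drops straight out of Bayes's Theorem. Here is a short Python sketch of the calculation using the figures above (1 per cent prevalence, 95 per cent accuracy, 5 per cent false positives); the variable names are my own.

```python
# The screening-test example above, worked with Bayes's Theorem.
# The figures come from the article; the code itself is only a sketch.

prevalence = 0.01           # prior probability of having the disease
sensitivity = 0.95          # P(positive test | disease) - the test's accuracy
false_positive_rate = 0.05  # P(positive test | no disease)

# Total probability that a randomly chosen person tests positive.
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes's Theorem: P(disease | positive test).
p_disease_given_positive = sensitivity * prevalence / p_positive

print(round(p_disease_given_positive, 3))  # about 0.161 - roughly one in six
```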

Use of Bayes's Theorem didn't quite die out. Alan Turing and Jack Good at Bletchley Park made extensive use of the theorem during the second World War when working on the Nazi Enigma code, using it to work out the most probable meaning of the German text in intercepted messages. While it was being derided elsewhere, Bayes's Theorem was helping to shorten the war significantly.

In the 1920s statisticians devoted themselves to developing entirely objective methods ("significance testing") for analysing data, eliminating the need to estimate prior probabilities. In 1925 Ronald Fisher introduced the now widely used P value, which is the chance of getting results at least as striking as those observed purely by fluke, assuming there is no real effect. If P is 0.5, results like these would turn up by fluke half the time even if nothing real were going on. Generally P must be less than 0.05 for the results to be declared significant.
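For concreteness, here is a small Python sketch of a Fisher-style significance test on a made-up example - a coin that comes up heads 60 times in 100 flips; the example and the one-sided test are my own illustration, not Fisher's original calculation.

```python
# A sketch of Fisher-style significance testing: the P value is the chance of
# seeing a result at least this extreme purely by fluke, assuming no real
# effect (here, assuming the coin is fair). The coin example is illustrative.

from math import comb

def p_value_heads(heads, flips):
    """One-sided P value: probability of at least `heads` heads in `flips`
    tosses of a fair coin."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# 60 heads in 100 flips of a supposedly fair coin.
print(round(p_value_heads(60, 100), 3))  # about 0.028 - below 0.05, so the
                                         # result would be declared significant
```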

Many statisticians now gravely question the value of Fisher's methods. One eminent statistician has shown that P values can easily exaggerate the apparent significance of implausible results by a factor of 10. Nevertheless, P values remain extremely widely used in science. Moreover, the apparent total objectivity of Fisher's methods is an illusion: in real-life analysis, intuition and educated guesswork inevitably creep in. Bayes's Theorem is superior because it acknowledges this otherwise vague dimension, making it explicit and quantitative.
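The point can be illustrated with a rough Bayesian calculation - the toy numbers below are my own, not the eminent statistician's analysis. Give an implausible theory a prior probability of 1 per cent and suppose the experiment's evidence is ten times more likely if the theory is true than if it is false.

```python
# A toy illustration (assumed numbers) of why a 'significant' result can still
# leave an implausible theory unlikely once the prior is taken into account.

prior = 0.01              # prior probability of the implausible theory
likelihood_ratio = 10.0   # evidence is 10 times more likely if the theory is true

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))  # about 0.092 - the theory is still probably false,
                            # however impressive the P value looks
```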

Bayes's Theorem is now widely used in analysing courtroom evidence, as it lends itself to combining different strands of evidence. Bayesian analysis shows that although the chances of a DNA match between a sample left at a crime scene and a person picked at random from the general population may be millions to one against, this figure is not the same as the odds against the suspect being innocent in the absence of other evidence. Experts on Bayes's Theorem now frequently figure as witnesses for the defence.
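This courtroom point - often called the prosecutor's fallacy - can be shown with a toy Bayesian calculation; the match probability and the size of the pool of possible culprits below are illustrative assumptions, not figures from any real case.

```python
# A toy illustration of why a one-in-a-million match probability is not a
# one-in-a-million chance of innocence. All numbers are assumptions.

random_match_probability = 1e-6  # chance an innocent person matches by coincidence
possible_culprits = 1_000_000    # people who could plausibly have left the sample

# With no other evidence, about one innocent person in the pool is expected to
# match by chance, alongside the one true source of the sample.
expected_innocent_matches = random_match_probability * possible_culprits
p_guilt_given_match = 1 / (1 + expected_innocent_matches)

print(expected_innocent_matches)      # about 1.0
print(round(p_guilt_given_match, 2))  # about 0.5 - nowhere near millions to one
```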

William Reville is associate professor of biochemistry and public awareness of science officer at UCC - http://understandingscience.ucc.ie