Thursday 6 January 2022

Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science by Aubrey Clayton
My rating: 4 of 5 stars

Spotting Scientific Method on the Hoof

Will I make (or lose) any money betting heads on a coin flip 100 times? Probably not.
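A minimal simulation sketch (my own illustration, not Clayton's) bears this out: betting one unit on heads each flip at even money, the average outcome over many 100-flip sessions hovers around zero, and the typical swing in any single session is only about ten units either way.

```python
import random

def average_net(flips=100, stake=1, sessions=10_000):
    """Average net winnings from betting `stake` on heads each flip, at even money."""
    total = 0
    for _ in range(sessions):
        heads = sum(random.random() < 0.5 for _ in range(flips))
        total += stake * (heads - (flips - heads))
    return total / sessions

# The expected value of one session is 0; its standard deviation is
# sqrt(100) = 10 stakes, so large wins or losses are unlikely.
print(average_net())  # typically within a fraction of a stake of 0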

On the other hand, if I get a positive result from molecular genetic testing for FGD1 gene mutations, does this mean I probably have Aarskog syndrome? Almost certainly not.

The difference between these two situations is critical. In the first, I already know the probabilities involved: on average, half the coin flips will come up heads and half tails. In the second, I want to know the probability that I have the disease given that I tested positive for it.

My likely first reaction to the test result is ‘But how accurate is the test?’ Wrong question. Even if the test is highly accurate, Aarskog syndrome is so rare in the population - its incidence is estimated at less than 1 in 25,000 - that, given no other information, a positive result is far more likely to be a mistake by the test than a sign that I actually have the syndrome.
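To make the arithmetic concrete (the test figures here are my own illustrative assumptions, not the actual performance of any FGD1 assay): even a test with 99% sensitivity and only a 1% false-positive rate leaves the chance of actually having the syndrome at well under 1% after a positive result.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) by Bayes' theorem."""
    p_positive = prior * sensitivity + (1 - prior) * false_positive_rate
    return prior * sensitivity / p_positive

# Assumed, illustrative test performance: 99% sensitivity, 1% false positives.
# Prevalence taken as 1 in 25,000.
print(posterior(1 / 25_000, 0.99, 0.01))  # ~0.0039, i.e. about 0.4%
```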

The overall incidence of the disease in this example is called a prior probability. The prior plays the same role as the knowledge, in the first example, that there are only two equally likely outcomes - heads or tails. It forms the background to the practical situation. And it is itself an estimate: at some point a researcher arrived at that 1-in-25,000 figure from still earlier judgements - ever ‘more prior’ probabilities - about the incidence of the disease, perhaps inferred from how rarely it appears in the literature (only about 60 papers have been published on it worldwide).
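The same arithmetic shows how much the prior drives the conclusion. Holding my assumed test performance fixed and varying only the prior, the answer to ‘do I have it?’ swings from a fraction of a percent to better than 90%:

```python
def posterior(prior, sensitivity, false_positive_rate):
    p_positive = prior * sensitivity + (1 - prior) * false_positive_rate
    return prior * sensitivity / p_positive

# Same hypothetical 99%-sensitive, 1%-false-positive test as above;
# only the assumed prevalence (the prior) changes.
for prior in (1 / 25_000, 1 / 1_000, 1 / 100, 1 / 10):
    print(f"prior {prior:.5f} -> posterior {posterior(prior, 0.99, 0.01):.3f}")
# prior 0.00004 -> posterior 0.004
# prior 0.00100 -> posterior 0.090
# prior 0.01000 -> posterior 0.500
# prior 0.10000 -> posterior 0.917
```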

In other words, what we already know is extremely important in interpreting new information. The more unexpected, strange, or novel an event, the more evidence we need to take it seriously. This is a common-sensical idea, but it has profound implications, one of which, according to Aubrey Clayton, is that “It is impossible to ‘measure’ a probability by experimentation.” And this is another way of saying that “There is no such thing as ‘objective’ probability.” And therefore that “‘Rejecting’ or ‘accepting’ a hypothesis is not the proper function of statistics and is, in fact, dangerously misleading and destructive.”

And yet ignoring what we already know is exactly what most researchers do, especially (but not only) in the social sciences. This mistake is not trivial. According to Clayton:
“These methods are not wrong in a minor way, in the sense that Newtonian physics is technically just a low-velocity, constant-gravitational approximation to the truth but still allows us successfully to build bridges and trains. They are simply and irredeemably wrong. They are logically bankrupt… The growth of statistical methods represents perhaps the greatest transformation in the practice of science since the Enlightenment. The suggestion that the fundamental logic underlying these methods is broken should be terrifying.”


Of course these claims will be controversial. I take Clayton as authoritative when he says that “No two authors, it seems, have ever completely agreed on the foundations of probability and statistics, often not even with themselves.” If so, this book is yet another example of the inevitable (and necessary) instability of what is casually referred to as ‘scientific method.’ If such a thing exists at all, it is demonstrated in this kind of critique of established procedures, a sort of intellectual self-immolation.

View all my reviews
