A manuscript from 1987

Yesterday I found a copy of one of the first manuscripts I ever submitted. It was never published, so it is time to make it available. The ideas discussed are nowadays commonplace. The manuscript, I think, is still mostly valid almost 30 years after I wrote it… I should have tried harder to get it published at the time.

Click on the image to read the full manuscript.

Nature article is wrong about 115 year limit on human lifespan – NRC


Leading scientific journal Nature reported on Wednesday about a maximum lifespan for humans. But are their statistics right?

Source: Nature article is wrong about 115 year limit on human lifespan – NRC


A small set of observations, including a few extreme ones, plus the subjective splitting of the data into two subsets fitted separately to a linear regression model, resulted in very clear-cut conclusions and striking figures. However, none of this is solid evidence, or evidence at all, supporting the paper’s conclusions. This series of articles not only discusses the problems in the paper but, more importantly, traces the review process that allowed it to be published in Nature.

A new analysis of the data, still based on model fitting, appears in an article at “Ask a Swiss”. That analysis detects a significant change in slope, but we still do not have confidence bands available.
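The sketch below, in Python with numpy and statsmodels (my choice of tools, not those used in the paper or in the “Ask a Swiss” analysis), uses made-up data only to illustrate the two points above: how splitting a data set at an arbitrary point and fitting each subset separately almost guarantees apparently different slopes, and how confidence bands for the fitted lines can be obtained.

import numpy as np
import statsmodels.api as sm

# Made-up yearly values loosely resembling a maximum reported age at death;
# purely synthetic, not the data analysed in the Nature paper.
rng = np.random.default_rng(1)
year = np.arange(1960, 2007)
mra = 108 + 0.05 * (year - 1960) + rng.normal(0, 1.5, year.size)

def fit_with_bands(x, y):
    # Ordinary least squares fit plus a 95% confidence band for the fitted mean.
    X = sm.add_constant(x)
    fit = sm.OLS(y, X).fit()
    band = fit.get_prediction(X).summary_frame(alpha=0.05)
    return fit, band[["mean", "mean_ci_lower", "mean_ci_upper"]]

# Subjective split at an arbitrary year, as criticized above: two separate
# fits will nearly always yield somewhat different slopes, even in pure noise.
split = 1995
before = year < split
fit_before, band_before = fit_with_bands(year[before], mra[before])
fit_after, band_after = fit_with_bands(year[~before], mra[~before])
print("slope before:", round(fit_before.params[1], 3),
      "slope after:", round(fit_after.params[1], 3))

Plotting the two bands would make it immediately visible whether the apparent change in slope is larger than the uncertainty in the fitted lines, which is exactly the information missing from the figures under discussion.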


More about P-values: what are the alternatives?

I mentioned earlier that a high-ranking journal in psychology, “Basic and Applied Social Psychology”, has banned the use of P-values. Today, I came across some additional material on this question. First of all, there is the controversial editorial in which the decision was announced.

A paper published in this journal gives guidelines on the best way of presenting results without the use of P-values. The paper by Geoff Cumming, titled “The New Statistics: Why and How”, makes a good argument for using confidence intervals and other descriptive statistics in place of P-values.
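As a concrete, if minimal, illustration of that approach (my own sketch in Python with numpy and scipy, not an example from Cumming’s paper), one can report an estimated difference in means together with a Welch-type confidence interval instead of a P-value:

import numpy as np
from scipy import stats

def welch_ci(a, b, level=0.95):
    # Estimated difference in means with a Welch-type confidence interval.
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    se = np.sqrt(va + vb)
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = (va + vb) ** 2 / (va ** 2 / (a.size - 1) + vb ** 2 / (b.size - 1))
    t = stats.t.ppf(0.5 + level / 2, df)
    return diff, (diff - t * se, diff + t * se)

# Hypothetical measurements from two groups, used only to show the output format.
group_a = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9]
group_b = [4.6, 4.9, 4.4, 4.7, 5.0, 4.5]
estimate, (low, high) = welch_ci(group_a, group_b)
print(f"difference in means: {estimate:.2f}, 95% CI: ({low:.2f}, {high:.2f})")

The interval conveys both the size of the estimated effect and its uncertainty, which a bare P-value does not.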

He also has a series of videos on YouTube, of which the three linked below relate to the use (and misuse) of P-values. For my taste, he does not make a clear enough distinction between the problem inherent in P-values (that they discard a lot of information to reach a true/false decision) and the problems due to the misuse and misinterpretation of tests of significance. He does mention the difference, but you need to keep your eyes and ears open to get this out of his presentations.

In addition, a blog post and a podcast of a round table complete the discussion of this issue, giving a somewhat wider account of the controversy surrounding the use of P-values.


The American Statistical Association Says [Mostly] No to p-values

Norman Matloff has published a new post after receiving criticism and comments about stating “The ASA says No to p-values” in the post I wrote about yesterday. He defends his interpretation in this new post. However, I think that, in a context different from the “Big Data” field he is used to, the interpretation of the statement does not always need to be “Says No to p-values”; in many cases it could instead be “Use p-values to assess the strength of the evidence and nothing else”. Still, “tests” with binary outcomes on probabilities that are essentially continuous will always be based on an arbitrary threshold and discard a great deal of information. Consequently, using “assess” in place of “test”, as suggested by Norman Matloff, makes a lot of sense to me.
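A small numerical illustration of that last point (my own example in Python with numpy and scipy, not one from Matloff’s post): estimates carrying nearly identical evidence against the null hypothesis fall on opposite sides of the conventional 0.05 threshold and so lead to opposite binary decisions.

import numpy as np
from scipy import stats

# Effect estimates sharing the same standard error; only their size varies slightly.
se = 1.0
estimates = np.array([1.90, 1.95, 1.96, 1.97, 2.05])

for est in estimates:
    z = est / se
    p = 2 * stats.norm.sf(abs(z))  # two-sided P-value for a normal test statistic
    decision = "reject H0" if p < 0.05 else "do not reject H0"
    print(f"estimate {est:.2f}  P = {p:.4f}  -> {decision}")

# P-values of about 0.051 and 0.049 describe almost the same strength of
# evidence, yet the binary decision at the 0.05 threshold treats them as opposites.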

The new post is at https://matloff.wordpress.com/2016/03/09/further-comments-on-the-asa-manifesto/

The American Statistical Association Says No to p-values

Norman Matloff (2016) writes in his post:

Sadly, the concept of p-values and significance testing forms the very core of statistics. A number of us have been pointing out for decades that p-values are at best underinformative and often misleading…

Source: After 150 Years, the ASA Says No to p-values | Mad (Data) Scientist


Yesterday, the statement by the American Statistical Association was published on-line in the journal “The American Statistician”. Many statisticians have been aware of the problems of significance tests for a long time, but general practice, teaching, journal instructions and editors’ requirements had not changed. Let’s hope the statement will start real changes in everyday practice.

John W. Tukey (1991) wrote quite boldly about the problem much earlier:

Statisticians classically asked the wrong question—and were willing to answer with a lie, one that was often a downright lie. They asked “Are the effects of A and B different?” and they were willing to answer “no.”

All we know about the world teaches us that the effects of A and B are always different—in some decimal place—for any A and B. Thus asking “Are the effects different?” is foolish.

What we should be answering first is “Can we tell the direction in which the effects of A differ from the effects of B?” In other words, can we be confident about the direction from A to B? Is it “up,” “down” or “uncertain”?

The third answer to this first question is that we are “uncertain about the direction”—it is not, and never should be, that we “accept the null hypothesis.”
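Tukey’s three possible answers map naturally onto a confidence interval for the difference between the effects of A and B. A minimal sketch of that reading, in Python (my own illustration, not code from Tukey or from the ASA statement):

def direction(ci_low, ci_high):
    # Tukey-style three-way conclusion about the sign of the difference A minus B.
    if ci_low > 0:
        return "up (A larger than B)"
    if ci_high < 0:
        return "down (A smaller than B)"
    return "uncertain about the direction"

# Hypothetical 95% confidence intervals for the difference A - B.
print(direction(0.4, 1.6))    # up
print(direction(-1.2, -0.1))  # down
print(direction(-0.3, 0.9))   # uncertain, never "accept the null hypothesis"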