<Concluding> that the null hypothesis is false



The expression "Concluding that the null hypothesis is false" appears to mean that the null hypothesis itself is neither false nor true until it has been tested, and that our free choice to conclude that it is "false" or "true" has to be verified or falsified against observational evidence.

The question of this thread is: Does "concluding" mean "(we use our free will to take the action of) concluding"?

P values indicate how incompatible the observed data may be with a null hypothesis; “P<0.05” implies that a treatment effect or exposure association larger than that observed would occur less than 5% of the time under a null hypothesis of no effect or association and assuming no confounding. Concluding that the null hypothesis is false when in fact it is true (a type I error in statistical terms) has a likelihood of less than 5%. When P values are reported for multiple outcomes without adjustment for multiplicity, the probability of declaring a treatment difference when none exists can be much higher than 5%. When 10 tests are conducted, the probability that at least one of the 10 will have a P value less than 0.05 may be as high as 40% when the null hypothesis of no difference is true.

Source: New England Journal of Medicine, July 18, 2019
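The multiple-testing figure quoted in the NEJM passage above is easy to check: if each of 10 independent tests has a 5% chance of a false positive under the null, the chance that at least one comes out "significant" is 1 − 0.95¹⁰ ≈ 40%. A minimal sketch (the function name is my own, not from the article):

```python
def familywise_error_rate(alpha=0.05, m=10):
    """Probability that at least one of m independent tests
    rejects a true null hypothesis at significance level alpha."""
    return 1 - (1 - alpha) ** m

# With 10 tests at the 0.05 level, as in the NEJM quote:
print(round(familywise_error_rate(0.05, 10), 3))  # ~0.401, i.e. about 40%
```

This assumes the tests are independent; with correlated outcomes the inflation is smaller but still present, which is why the article stresses adjusting for multiplicity.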
  • The pianist

    Senior Member
    English - US
    Free will ---- yes, with all the necessary requisites.

    'Concluding' here means considering all the relevant data that you have available, AND considering all of the experience that you have in this area, you make a DECISION that the null hypothesis is false. That decision is your conclusion, and you certainly were free to make it.

    Uncle Jack

    Senior Member
    British English
    I don't see why you should invoke free will. You might falsely conclude a hypothesis is false if you use faulty reasoning, don't have all the facts, have some defect within your brain, misinterpret the data, rely on a faulty sensor, have defective vision or any number of other things.

    However, in this particular instance, the writer appears to be discussing the statistical probability that the dataset you have used as evidence to test the null hypothesis is so skewed as to give a false negative. Your reasoning is sound, you have exercised free will only to the extent necessary to come up with a null hypothesis and devise a means of testing it. Your methodology is good, there is nothing wrong with your experimental methods, but you have just been unfortunate to get a set of freakish results.


    Senior Member
    English (UK then US)
    Ah yes, that makes sense, thank you; I had not tried to fully understand the article.
    I have been following this "re-evaluation" of the advisability of using only p-values to guide "conclusions". The NEJM acknowledges this in the intro to their article:

    Journal editors and statistical consultants have become increasingly concerned about the overuse and misinterpretation of significance testing and P values in the medical literature. Along with their strengths, P values are subject to inherent weaknesses, as summarized in recent publications from the American Statistical Association.3,4


    Senior Member
    English (UK then US)
    Relaxed. Because the entire scientific community, indeed the whole world, is struggling with the understanding of P values.
    I think the "struggle" is not in what a p-value is or how to calculate it, but rather how much weight to put on it when advancing conclusions about a data set.

    This article is also useful for those who need to report such issues to the non-science world; it covers p-values and better indicators:
    Statistics for journalists: Understanding what effect size means
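    One of the "better indicators" that effect-size articles like the one above usually recommend is Cohen's d, which reports how large a difference is rather than merely whether it is statistically detectable. A minimal sketch (standard formula; the data are made up for illustration):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two samples,
    using the pooled standard deviation (assumes roughly equal variances)."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n-1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical example: a mean difference of 1 unit against a pooled SD of ~2.58
print(round(cohens_d([2, 4, 6, 8], [1, 3, 5, 7]), 3))  # ~0.387, a "small" effect
```

    Unlike a p-value, d does not shrink toward "significance" just because the sample is large, which is why it is often a more honest summary for a lay audience.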