Null effects are fine, but you need to discuss power!

I like the way a lot of the social sciences are starting to push for publishing more studies that tried something plausible and then found nothing.  Null-effects papers tend to be difficult to publish, which leads to publication bias, meaning the published literature is more likely to show a spurious effect than a true non-finding.

BUT.  One almost sure-fire way to find no effect is to have a sample size that is too small to pick up an effect.

If you find no effect, you need to discuss sample size.

If you find no effect from an experiment, then you really need to talk about the power analysis that you did *before* you ran the experiment, showing the sample size you would need to detect an effect of a given size.
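For concreteness, here's a minimal sketch of what that kind of a priori calculation can look like, using Python's statsmodels.  The effect size, alpha, and power targets below are illustrative assumptions, not recommendations for any particular study:

```python
# A priori power analysis: how many observations per group would we need
# to detect a given effect with a two-sample t-test?
# (The effect size, alpha, and power below are illustrative assumptions.)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

target_effect = 0.3   # smallest effect we'd care about, in Cohen's d units (an assumption)
alpha = 0.05          # conventional significance level
power = 0.80          # conventional power target

n_per_group = analysis.solve_power(
    effect_size=target_effect,
    alpha=alpha,
    power=power,
    alternative="two-sided",
)
print(f"Need about {n_per_group:.0f} observations per group")
# With these numbers, roughly 175 per group.
```

The point is that this number exists *before* the data do, so a null result can be read against it.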

And a large point estimate that just isn't significant isn't as convincing as an insignificant small estimate or, even better, a small estimate that flips sign depending on specification.

As the great Deirdre McCloskey says, statistical significance is not the same as oomph.  Or as I tell my students, meaningful significance is not the same as statistical significance.

A true null effect is one with a small effect size, whether or not it is significant.  And if you find an insignificant effect, then you have to discuss whether this is a true non-finding or whether you just didn't collect enough observations.
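One way to have that discussion is to run the calculation in reverse: given the sample you actually ended up with, what's the smallest effect you had reasonable power to detect?  A sketch, again with made-up numbers:

```python
# Post hoc framing: given the sample size we actually have, what is the
# minimum detectable effect (Cohen's d) at conventional alpha and power?
# (The sample size below is an illustrative assumption.)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

n_per_group = 40  # what the study actually collected -- an assumption

min_detectable = analysis.solve_power(
    nobs1=n_per_group,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Minimum detectable effect: d = {min_detectable:.2f}")
# With n = 40 per group, only effects around d = 0.63 are reliably
# detectable, so an insignificant d = 0.2 says little about whether
# the true effect is null.
```

If your estimate is well below that threshold and insignificant, you haven't shown the effect is null; you've shown your study couldn't tell.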

Got that?  Null effect = fine, but it has to be a real null effect and not just a bad study.

3 Responses to “Null effects are fine, but you need to discuss power!”

  1. bogart Says:

    Preach it, sister!

  2. CG Says:

    You sound like an editor I know in my field!

