Eight Tiny Experiments for Honest Statistics

  1. One Big Question  
    Rewrite your study to ask a single, modelable question. If you hear yourself listing pairwise comparisons, you need ANOVA or a planned-contrast design (a minimal ANOVA sketch follows the list).

  2. Define “Meaningful” First  
    Pick the smallest effect worth acting on. Decisions beat declarations; a tiny, “significant” blip may be operationally irrelevant (see the sample-size sketch after this list, built around that threshold).

  3. Visual First Pass  
    Plot group distributions and intervals. If the picture disagrees with the p-value, investigate the model, not the plot (see the plotting sketch after this list).

  4. Pre-Commit Your Forks  
    List the few follow-ups you’d run if the omnibus test is significant. Curiosity is good… pre-planned curiosity is better.

  5. Calibrate Error  
    State your alpha and your multiple-comparison plan. If you test more, protect more (see the Holm-correction sketch after this list).

  6. Report Size and Uncertainty  
    Always pair effect sizes (η², Cohen’s d, Cramér’s V) with confidence intervals. Magnitude plus bounds beats stars on a table (see the effect-size sketch after this list).

  7. Separate Signal from Story  
    Ask: does the design support the causal interpretation you’re tempted to make? If not, write the modest story.

  8. Decision Sentence  
    Close with a commitment: “If the effect is at least M, we’ll implement Z; otherwise we’ll do Q.” Statistics serve choices; they are not just numbers (see the decision-rule sketch after this list).
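
A minimal sketch of the single-question framing from item 1: one omnibus test instead of a pile of pairwise t-tests. The group names, sample sizes, and simulated values are illustrative assumptions, not data from this post.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
groups = {
    "control": rng.normal(loc=10.0, scale=2.0, size=40),
    "variant_a": rng.normal(loc=10.5, scale=2.0, size=40),
    "variant_b": rng.normal(loc=11.2, scale=2.0, size=40),
}

# One omnibus question -- "do the group means differ at all?" --
# rather than a list of ad hoc pairwise comparisons.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```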
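
For item 2, one way to make the smallest effect worth acting on concrete is to plan the sample size around it. A minimal sketch using statsmodels; the threshold, alpha, and power values are assumed placeholders, not recommendations from the post.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative planning values -- set your own before collecting data.
SMALLEST_EFFECT = 0.3   # smallest Cohen's d we would actually act on
ALPHA = 0.05
POWER = 0.80

# Sample size per group needed so an effect at the "worth acting on"
# threshold is unlikely to be missed; anything smaller is treated as noise.
n_per_group = TTestIndPower().solve_power(
    effect_size=SMALLEST_EFFECT, alpha=ALPHA, power=POWER
)
print(f"About {n_per_group:.0f} observations per group")
```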
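
For item 3’s visual first pass, a minimal matplotlib sketch that shows each group’s raw points next to its mean and a 95% interval for that mean; the data are the same illustrative simulated groups as in the ANOVA sketch.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(42)
groups = {
    "control": rng.normal(10.0, 2.0, 40),
    "variant_a": rng.normal(10.5, 2.0, 40),
    "variant_b": rng.normal(11.2, 2.0, 40),
}

fig, ax = plt.subplots()
for i, (name, values) in enumerate(groups.items()):
    # Raw distribution: jittered points so no observation is hidden.
    jitter = rng.uniform(-0.1, 0.1, size=values.size)
    ax.scatter(np.full(values.size, i) + jitter, values, alpha=0.4)
    # Group mean with a 95% confidence interval for the mean.
    mean = values.mean()
    low, high = stats.t.interval(0.95, df=values.size - 1,
                                 loc=mean, scale=stats.sem(values))
    ax.errorbar(i, mean, yerr=[[mean - low], [high - mean]],
                fmt="o", color="black", capsize=4)
ax.set_xticks(range(len(groups)))
ax.set_xticklabels(list(groups))
ax.set_ylabel("outcome")
plt.show()
```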
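
For item 5 (and the pre-committed follow-ups of item 4), a minimal sketch of a multiple-comparison plan using the Holm correction from statsmodels; the comparison names and raw p-values are placeholders for illustration.

```python
from statsmodels.stats.multitest import multipletests

ALPHA = 0.05
# The pre-planned follow-ups, listed before looking at the data.
planned_followups = {
    "control vs variant_a": 0.041,
    "control vs variant_b": 0.008,
    "variant_a vs variant_b": 0.130,
}

# Holm step-down correction: test more, protect more.
reject, p_adjusted, _, _ = multipletests(
    list(planned_followups.values()), alpha=ALPHA, method="holm"
)
for name, p_adj, rej in zip(planned_followups, p_adjusted, reject):
    print(f"{name}: adjusted p = {p_adj:.3f}, reject: {rej}")
```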
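
For item 6, a minimal sketch that pairs Cohen’s d with a bootstrap confidence interval. The two simulated samples and the percentile-bootstrap choice are illustrative assumptions; any effect size you report can be bracketed the same way.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(10.0, 2.0, 60)   # e.g. control
b = rng.normal(11.0, 2.0, 60)   # e.g. treatment

def cohens_d(x, y):
    # Standardized mean difference using the pooled standard deviation.
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(y) - np.mean(x)) / pooled_sd

# Bootstrap confidence interval for the effect size itself.
res = stats.bootstrap((a, b), cohens_d, vectorized=False,
                      n_resamples=5000, method="percentile",
                      random_state=rng)
ci = res.confidence_interval
print(f"d = {cohens_d(a, b):.2f}, 95% CI [{ci.low:.2f}, {ci.high:.2f}]")
```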
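
Finally, for item 8’s decision sentence, a tiny sketch of the commitment written as a rule. M, Z, Q, the numbers, and the choice to gate the decision on the interval’s lower bound are all placeholders to be fixed in advance by the team.

```python
# Illustrative inputs: the pre-declared threshold M and a point estimate
# with its 95% confidence interval from whatever model was fit.
M = 0.5
estimate, ci_low, ci_high = 0.72, 0.31, 1.13

# "If the effect is at least M, we'll implement Z; otherwise we'll do Q."
# Gating on the lower confidence bound is one pre-registerable reading
# of "at least M".
action = "implement Z" if ci_low >= M else "do Q"
print(f"Effect {estimate:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}) -> {action}")
```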
