What problem are you actually testing?

Every analysis answers a question, but not always the one you think. One group against a benchmark answers "Do we clear this bar?" Two independent groups answer "Which approach works better, on average?" The same people measured twice answers "Do people change when they face both conditions?" Misalign the design and the software will still spit out a p-value, just for the wrong problem. Before you run anything, write one plain sentence: "I'm testing whether ______ because ______ matters." Now try to break it. Could a different design answer it more directly? If a skeptic read your variable names, would the logic be obvious? Labels, counterbalancing, and timing aren't clerical; they're the meaning. Get the question right and the test becomes a formality. Get the question wrong and the test becomes theater.
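The three questions above map onto three different tests. Here's a minimal sketch using SciPy with simulated data (the group names, means, and sample sizes are illustrative assumptions, not from any real study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# "Do we clear this bar?" -- one group against a fixed benchmark
scores = rng.normal(loc=72, scale=8, size=30)       # hypothetical scores
t1, p1 = stats.ttest_1samp(scores, popmean=70)      # benchmark of 70

# "Which approach works better, on average?" -- two independent groups
group_a = rng.normal(loc=70, scale=8, size=30)
group_b = rng.normal(loc=74, scale=8, size=30)
t2, p2 = stats.ttest_ind(group_a, group_b)

# "Do people change when they face both conditions?" -- same people, twice
before = rng.normal(loc=70, scale=8, size=30)
after = before + rng.normal(loc=2, scale=3, size=30)  # within-person shift
t3, p3 = stats.ttest_rel(before, after)
```

Note that the data could be identical in shape across all three calls; only the design, encoded in which function you choose and how observations are paired, determines which question the p-value actually answers.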
