In their article “One Swallow Doesn’t Make a Summer: New Evidence on Anchoring Effects,” Zacharias Maniadis, Fabio Tufano, and John List (2014) present a framework for statistical inference. Drawing on evidence from a simulation, the authors argue that the decision to call an experimental finding noteworthy, or deserving of great attention, should be based on the calculated post-study probability rather than on the classical significance test. Their simulation assumes that the unconditional probability P(H0) is known. This assumption restricts the conclusions to the following: (i) if a non-negligible difference between the post-study probability and the significance level is found, then only post-study probabilities offer proper inference; (ii) if a negligible difference is found, then both approaches offer more or less proper inference, though post-study probabilities would be slightly more accurate. In this comment I raise some concerns about their simulation. First, if P(H0) is unknown, as is often the case in economic applications, the post-study probability can lead to even worse inference than the classical significance test, depending on the quality of the prior. Second, the simulation in Maniadis, Tufano, and List (2014) ignores previous assessments of P(H0) and instead relies on a selective empirical setup that favors the use of post-study probabilities. Such a setup may illustrate the pitfalls of the classical significance test, but, contrary to what Maniadis, Tufano, and List argue, their results do not support general recommendations about which approach is most appropriate.
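To make the dependence on the prior concrete, the post-study probability in this literature is typically computed as (1 − β)π / ((1 − β)π + α(1 − π)), where π = 1 − P(H0) is the prior probability that the effect is real, α is the significance level, and 1 − β is the power. The sketch below (function name and parameter values are illustrative, not taken from the article) shows how strongly the result hinges on the assumed π:

```python
def post_study_probability(prior, alpha=0.05, power=0.8):
    """Probability that a statistically significant finding reflects a
    true effect, given prior = P(H1) = 1 - P(H0), significance level
    alpha, and power 1 - beta."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

# A skeptical prior makes even a significant result weak evidence,
# while an optimistic prior makes it look nearly conclusive:
print(round(post_study_probability(0.05), 3))  # prior = 0.05
print(round(post_study_probability(0.50), 3))  # prior = 0.50
```

With π = 0.05 the post-study probability is only about 0.46, while with π = 0.50 it is about 0.94; if P(H0) is unknown, a badly chosen prior can thus distort inference more than the significance test it is meant to replace.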