Research on financial misconduct uses data on enforcement outcomes, such as the penalties that the firm pays. The distribution of most enforcement outcomes shows extreme observations, or outliers, but you would not necessarily glean that from a casual examination of some of the leading research on financial misconduct. In this paper I describe the public-policy context of such research and raise the issue of whether the extreme-values problem has been given adequate attention. I touch on a number of papers, and then focus on a 2018 Journal of Accounting Research article by Andrew Call, Gerald Martin, Nathan Sharp, and Jaron Wilde (CMSW), which purports to show a positive association between the severity of enforcement outcomes and the involvement of a whistleblower. I show that the top one percent of the enforcement outcomes (11 observations) in CMSW’s large sample of 1,133 enforcement actions drive their results; a number of robustness checks suggest that the extreme-values problem is serious. Moreover, I describe numerous sources of fuzziness in their whistleblower coding, and explain that research subject to the extreme-values problem is highly sensitive to such fuzziness, because a few dubious codings can change the results. Unfortunately, CMSW do not disclose how each coding was arrived at, so we cannot peer into the fuzziness to see how the extreme observations came to be coded as they are. I suggest that CMSW could have been upfront about the looming problem of a few extreme values in enforcement outcomes, should have shown how the outliers affect their results, and should have explained and resolved (to the extent possible) the mysteries surrounding the coding.
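The sensitivity described above can be illustrated with a small synthetic sketch (all numbers are invented for illustration and are not CMSW's data): when a mere 11 extreme outcomes are concentrated in one group of a sample of 1,133, a group-mean gap that is negligible without them becomes enormous with them.

```python
# Synthetic illustration of the extreme-values problem: a handful of huge
# observations can drive a group-mean comparison. Sample size (1,133) and
# outlier count (11) echo the abstract; all dollar figures are invented.
import random
import statistics

random.seed(0)

N = 1133          # total enforcement actions, as in the abstract
N_OUTLIERS = 11   # the top one percent discussed in the abstract

# Typical penalties: modest, right-skewed amounts in both groups
# (hypothetical $ millions, drawn from the same distribution).
whistleblower = [random.lognormvariate(1.0, 1.0) for _ in range(N // 2)]
no_whistleblower = [random.lognormvariate(1.0, 1.0) for _ in range(N - N // 2)]

# Baseline: with no outliers, the two groups look essentially alike.
gap_before = statistics.mean(whistleblower) - statistics.mean(no_whistleblower)

# Now concentrate 11 huge outcomes (e.g., multi-billion-dollar settlements)
# in the whistleblower group and recompute the gap.
whistleblower_with = whistleblower + [5000.0] * N_OUTLIERS
gap_after = statistics.mean(whistleblower_with) - statistics.mean(no_whistleblower)

print(f"mean gap without outliers: {gap_before:.2f}")
print(f"mean gap with 11 outliers: {gap_after:.2f}")
```

A robustness check of the kind the abstract calls for is simply the first computation: report the comparison with the top one percent removed alongside the full-sample result.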
Furthermore, as regards the larger issue of public policy, it should be emphasized that we would expect strong and natural correlations among the severity of misconduct, the likelihood of penalties, and whistleblowing, just as we would among the severity of health emergencies, the likelihood of medical interventions, and calls to 9-1-1. Correlation is not causation, and CMSW do not give this point the emphasis it deserves. It looms especially large in light of my findings that CMSW’s results are not robust to removing the outliers. Accordingly, perhaps we should be surprised that CMSW did not find a statistically significant correlation for one of the enforcement-outcome categories.
Response to this article by Andrew C. Call, Nathan Y. Sharp, and Jaron H. Wilde: A Response to “Are a Few Huge Outcomes Distorting Financial Misconduct Research?” (EJW, March 2019).