I have just finished an econometrics replication project using Stata. I’m not going to identify the paper I replicated or its authors in this post because I have not yet contacted them for clarity. But my basic finding was this: The authors misrepresented the statistical significance of their findings, possibly with an eye toward publication (which might not have happened had they reported the true findings).
It all started like this: In my Advanced Statistical Techniques course I was tasked with finding a recently published study, contacting the authors for their raw data and .do file (an ASCII file containing Stata commands—basically their exact statistical methodology), and replicating the study to search for inconsistencies or methodological weaknesses.
The study examines rare events, which are therefore usually coded as binary or categorical variables instead of actual, precise values. Accordingly, the authors ran two different regressions: a standard logistic regression and a rare-events logistic regression. Finding the two comparable, the authors felt comfortable reporting the standard logistic regression.
I received the data and .do file within 24 hours of requesting it, and I got to work reading their paper to get a good understanding of the theoretical backbone of their arguments. Then last week I ran their regressions, and I discovered possible unethical reporting.
The authors did not report their p-values directly (info here for readers not familiar with statistical analysis). Instead, as is common practice, they reported levels of statistical significance, marked with asterisks next to the coefficients. A p-value of 0.05 is the conventional cutoff for statistical significance: it means that if there were no real effect, results at least as extreme as these would turn up by chance only 5% of the time. A p-value of 0.001 means such results would arise by chance only 0.1% of the time. A p-value above 0.05 is conventionally treated as not statistically significant. In order to keep track of their findings, I copied the claimed p-value significance into an Excel spreadsheet so I could quickly test their findings against my own.
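For readers who want a concrete picture of what a p-value is, here is a minimal coin-flip illustration of my own in Python. (This example has nothing to do with the paper’s data; the numbers are made up purely to show the calculation.)

```python
from math import comb

# Null hypothesis: the coin is fair (heads probability 0.5).
# Suppose we observe 60 heads in 100 flips.
# The one-sided p-value is the probability, under the null, of
# getting 60 OR MORE heads purely by chance.
n, observed = 100, 60
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2**n
print(round(p_value, 3))  # about 0.028: below the 0.05 cutoff
```

So 60 heads would be reported with one asterisk (significant at 0.05) but not two (0.01): the exact level matters, which is why checking the claimed stars against the actual p-values is worth doing.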
When I filled in the spreadsheet with my own results, the following caught me off guard. Check out the red cells:
At first I thought it was an honest mistake—that the authors accidentally reported their rare-events logistic regression instead of the standard logistic regression. But this was not the case. Anyway…
Two things become clear: 1) in many instances the reported (or claimed) relationships between the independent variables (IVs) and the dependent variable (DV) are stronger than the actual ones, and 2) in many cases the true p-values are too high to infer statistical significance. In fact, these weak p-values might have made it harder for the authors to get published, because a finding carries little weight unless statistical significance can be shown.
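The spreadsheet check itself is simple enough to sketch in a few lines of Python. The variable names, star thresholds, and p-values below are illustrative placeholders, not the paper’s actual results:

```python
# Conventional star thresholds: a claimed star level is only justified
# if the actual p-value falls below the corresponding cutoff.
STAR_CUTOFFS = {"***": 0.001, "**": 0.01, "*": 0.05, "": 1.0}

def overstated(claimed_stars: str, actual_p: float) -> bool:
    """True if the claimed significance level is not supported by actual_p."""
    return actual_p >= STAR_CUTOFFS[claimed_stars]

# Illustrative rows: (variable, stars claimed in the paper, p-value I computed).
rows = [
    ("iv_one",   "**",  0.004),  # claim holds: 0.004 < 0.01
    ("iv_two",   "***", 0.030),  # overstated: 0.030 is nowhere near 0.001
    ("iv_three", "*",   0.170),  # overstated: not significant at any level
]

flagged = [name for name, stars, p in rows if overstated(stars, p)]
print(flagged)  # the "red cells": ['iv_two', 'iv_three']
```

In my actual spreadsheet the comparison was done by eye with conditional formatting, but the logic is exactly this: claimed level versus computed p-value, row by row.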
One of the great things about the online atheist (and other non-believing freethinker) community is its propensity for embracing the scientific method to make inferences about the world. This method is much better than theory alone, wishful thinking, and especially theology. But when we read scientific papers, please bear in mind that 1) the authors could be wrong, and 2) the authors could be dishonest.
The problematic reporting I discovered in my project does not mean the authors were necessarily mistaken about their claims. More observations might yield stronger p-values. Indeed, some of their findings are just south of significant. But it does mean they were probably dishonest about the level of confidence they have in their claims. And to me this is unacceptable. Unless someone like me comes along and actually checks their work, their paper will serve as the theoretical backbone for other research projects, and future researchers won’t even know how weak the findings actually are.
So never take your skeptic’s hat off, even when reading articles by heavy-hitting scholarly researchers.