With the continuing uproar about academic dishonesty, it’s nice to report some good news, even if it concerns someone with whose views I disagree. Pro-gun Chicago academic John Lott has been widely suspected of fabricating a survey he claimed to have undertaken in 1997, concerning defensive uses of guns. He has now produced someone who claims to have been a respondent to the survey, and whose story appears credible. You can get the full story here, from Tim Lambert, an Australian critic of Lott’s work. CalPundit has more comments. In my view, unless there are new developments, the ‘case’ of fabrication against Lott must be dropped.
On the other hand, Lott’s own account shows the 1997 survey to have been slipshod in every respect, from survey design to documentation to statistical analysis. All Lott’s records of the survey were apparently wiped out by a crash on his personal computer shortly after the survey was undertaken, at a time when standard practice would call not only for backups but for the preservation of the original interview records. And the sample size was clearly far too small to justify the conclusions Lott claimed to draw.
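To make the sample-size point concrete, here is a minimal sketch in Python with purely hypothetical numbers (they are my illustration, not Lott’s actual figures). When the headline statistic is a percentage within a rare-event subsample, even a survey with thousands of respondents leaves only a handful of relevant answers, and the margin of error swamps the estimate.

```python
# Illustrative only: hypothetical numbers, not Lott's actual figures.
# The point: a percentage computed within a rare-event subsample of a
# survey has a margin of error far larger than the headline suggests.
import math

n_respondents = 2000       # hypothetical total survey size
n_defensive_uses = 20      # hypothetical respondents reporting a defensive gun use
n_brandish_only = 19       # hypothetical subset who say they merely brandished

p = n_brandish_only / n_defensive_uses
# Crude normal-approximation 95% confidence interval. That the interval
# runs past 100% is itself a symptom of how little data there is.
se = math.sqrt(p * (1 - p) / n_defensive_uses)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"estimate {p:.0%}; 95% CI roughly [{lo:.0%}, {hi:.0%}] from n={n_defensive_uses}")
```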
This survey is not, of course, Lott’s most important work. The study by Lott and Mustard, purporting to show that laws allowing ‘concealed carry’ of firearms lead to a reduction in crime, is much more important, and the data set for this study is publicly available. But as I pointed out in my post on data mining, with modern computer methods it’s easy to discover spurious correlations, either accidentally or deliberately. Only if you take painstaking care to record and justify your hypothesis testing procedure can the associated significance tests be taken seriously.
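To see how easily this happens, here is a small simulation (Python with numpy and scipy; the setup is entirely my own illustration and has nothing to do with Lott’s data). Testing pure noise against enough unrelated predictors reliably produces ‘significant’ correlations at the conventional 5% level.

```python
# Simulating the data-mining pitfall: test enough unrelated predictors
# against pure noise and some will look "significant" at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_predictors = 100, 200

y = rng.normal(size=n_obs)                    # outcome: pure noise
X = rng.normal(size=(n_obs, n_predictors))    # predictors: also pure noise

false_hits = sum(
    1 for j in range(n_predictors)
    if stats.pearsonr(X[:, j], y)[1] < 0.05
)

# With 200 independent tests at the 5% level we expect about 10 spurious
# "discoveries" even though nothing is related to anything.
print(f"{false_hits} of {n_predictors} noise predictors passed p < 0.05")
```

Unless the full set of hypotheses tested is recorded in advance, the reader has no way of telling the tenth spurious hit from a real finding.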
Given Lott’s own account of his methods, it is clear that he does not take the kind of care needed to avoid the pitfalls of data mining. His result must therefore be regarded as suggesting a hypothesis for further testing, rather than as providing substantive evidence in support of that hypothesis. The obvious further test is ‘out of sample’ testing: estimating the same model on similar data that was not used in the original estimation.
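As a sketch of what such a test looks like in code (synthetic data and stand-in variable names like carry_law, not Lott’s actual model or dataset), the idea is simply to fit on the original sample period and then check the fitted model against later, held-out observations:

```python
# Out-of-sample sketch: fit a regression on the "original" sample period,
# then see how it fares on later data never used in estimation.
# All data here is synthetic; carry_law and the controls are stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def make_panel(n):
    """Synthetic cross-section with no true carry-law effect."""
    carry_law = rng.integers(0, 2, size=n)             # 1 = carry law in force
    controls = rng.normal(size=(n, 3))                 # stand-in demographic controls
    crime = 0.5 * controls[:, 0] + rng.normal(size=n)  # outcome ignores carry_law
    X = sm.add_constant(np.column_stack([carry_law, controls]))
    return X, crime

X_train, y_train = make_panel(500)   # "original" estimation sample
X_test, y_test = make_panel(200)     # later adopters, held out

fit = sm.OLS(y_train, X_train).fit()
oos_rmse = np.sqrt(np.mean((fit.predict(X_test) - y_test) ** 2))
print(f"in-sample carry-law coefficient: {fit.params[1]:.3f}")
print(f"out-of-sample RMSE: {oos_rmse:.3f}")
```

If the estimated effect is real, it should show up in the held-out data too; if it evaporates or reverses there, the original result was probably an artefact of the fitting process.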
According to Tim Lambert, the work of Lott and Mustard fails this test:
In the case of Lott’s model we are in the fortunate position of being able to test its predictive power. Lott’s original data set ended in 1992. Between 1992 and 1996, 14 more jurisdictions (13 states and Philadelphia) adopted carry laws. We can test the predictive power of Lott’s model by seeing if it finds less crime in those jurisdictions. Ayres and Donohue [2] have done this test. They found that, using Lott’s model, in those jurisdictions carry laws were associated with more crime in all crime categories. Lott’s model fails the predictive test.
Ayres and Donohue go on to examine all the states adopting carry laws, using data up to 1997, and find that carry laws were associated with crime increases in more states than with decreases. They rather pointedly observe that
Those who were swayed by the statistical evidence previously offered by Lott and Mustard to believe the more guns, less crime hypothesis should now be more strongly inclined to accept the even stronger statistical evidence suggesting the crime-inducing effect of shall issue laws.
I haven’t checked these references myself, but unless Lambert has something badly wrong in his summary, I think it’s safe to disregard both the results Lott claims for his lost survey in 1997 and the statistical results of the Lott and Mustard study.