Tuesday, October 16, 2007

The Winner!

We have a winner for the "Michael Bellesiles Fraudulent Use of Statistics Award." For those who don't remember, Bellesiles won the prestigious Bancroft Prize in history for his book "Arming America," which sought to prove that, contrary to popular opinion, our Founders were not actually likely to be armed.

Trouble came for Bellesiles when enterprising gun owners checked his sources and found that they generally said the opposite of what he claimed. Bellesiles was even caught "channeling" probate records that had been burned as a result of the 1906 San Francisco earthquake, and he claimed that his own data had been destroyed in a nonexistent flood in his office building. He was stripped of the prize and resigned his tenured post at Emory.

Sadly, the competition for the Bellesiles award is fierce; some of the contenders are discussed at www.junkscience.com and elsewhere. Keynesian economists made a strong bid by claiming, contrary to the evidence, that the Depression was caused by tight money policies, when the opposite is closer to the truth. Advocates of global warming certainly made a case that they, too, deserved the award.

However, only one can win, and this year's Bellesiles award goes to Planned Parenthood's Guttmacher Institute, the World Health Organization, and The Lancet for their study on the correlation between anti-abortion laws and abortion rates. The study starts by using the wrong units: abortions per live birth, instead of abortions per sexually active woman of childbearing age. Any correlations found disappear once the correct units are used. Going further, it overstates the U.S. abortion rate by about 30% (33 abortions per 100 live births instead of the more accurate ~25 per 100), ignores other huge confounding factors, and claims to be able to accurately measure the rate at which illegal abortions are performed (I'm sure there's no reporting bias there!) in developing countries.
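A quick sketch, using entirely hypothetical numbers, of the two arithmetic points above: an abortions-per-live-birth figure moves whenever the birth rate moves, even when abortions per woman stay constant, and 33 per 100 births against an actual ~25 per 100 works out to roughly a 30% overstatement.

```python
# Hypothetical numbers chosen only to illustrate the units problem;
# they are not taken from the study or any real dataset.

def per_100_births(abortions, live_births):
    return 100 * abortions / live_births

def per_1000_women(abortions, women):
    return 1000 * abortions / women

# Same country, same abortions, same women -- only the birth rate differs.
abortions, women = 250_000, 10_000_000

high_births = per_100_births(abortions, 1_250_000)  # 20 per 100 births
low_births  = per_100_births(abortions, 800_000)    # 31.25 per 100 births

# The per-woman rate is identical in both scenarios: 25 per 1,000 women.
rate_women = per_1000_women(abortions, women)

# The overstatement claimed in the post: 33 vs. ~25 per 100 live births.
overstatement = (33 - 25) / 25  # 0.32, i.e. about 30%
```

The point of the sketch: a country with fewer births looks like it has a worse "abortion rate" under the per-birth measure even though nothing about abortion incidence changed, which is why the per-woman denominator is the meaningful one.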

My guess, partly supported by Guttmacher's own data as presented in the article, is that they reached conclusions exactly opposite to where the data led. Just like Bellesiles, they are a worthy winner of this award: they used the wrong units, appear to have falsified data, and assumed credibility in data where none ought to be assumed.

Again, if you think peer review guarantees quality, you are badly mistaken. What peer review most strongly guarantees is conformity, whether or not that conformity conforms to reality.
