Friday, May 05, 2006

Size Matters

First of all, McCloskey and Ziliak deserve a lot of credit for their thorough work on this topic.

The last time Steve Ziliak was at GMU, he told a statistical fable. Suppose, he said, you have two weight-loss drugs: Precise and Oomph. The makers of Precise guarantee a total weight loss of 10-12 pounds if you use their product; Oomph's makers guarantee a total loss of 12-40 pounds. Which product do you use? Which drug is more effective?

Obviously the answer depends on just how much weight you want to lose. But if you are 50 pounds overweight, the answer is Oomph, right?

The problem with statistics in the social sciences (and, as McCloskey and Ziliak show in their forthcoming book, in medicine and the hard sciences as well) is that the standard by which effectiveness is judged is statistical significance -- and statistical significance is much more a measure of statistical precision than it is of oomph, power, or practical importance. So in a typical study, Precise would earn the asterisk certifying it as effective, because its effect is statistically significant, whereas Oomph, with its high variance, might not show up as statistically significant at all. Yet it is clearly the more powerful drug.
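To see how that plays out in the arithmetic, here is a toy calculation in Python. The numbers are ones I made up to echo the fable -- no real trial behind them -- but they show the mechanism: a small, tightly clustered effect sails past the p < 0.05 bar, while a much bigger but noisier effect misses it.

```python
import math
from scipy import stats

# Made-up summary numbers echoing the fable: each drug tried on a small
# hypothetical sample of n dieters. Precise delivers a small, tightly
# clustered weight loss; Oomph delivers a big but wildly variable one.
n = 6
drugs = {
    "Precise": (11.0, 1.0),   # mean weight loss (lbs), sample std dev
    "Oomph":   (26.0, 30.0),
}

for name, (mean, sd) in drugs.items():
    se = sd / math.sqrt(n)                  # standard error of the mean
    t_stat = mean / se                      # t-statistic against "no effect"
    p = 2 * stats.t.sf(abs(t_stat), df=n - 1)   # two-sided p-value
    star = "*" if p < 0.05 else " "
    print(f"{name}{star}: mean loss {mean:5.1f} lbs, t = {t_stat:5.2f}, p = {p:.3f}")
```

Run it and Precise gets the asterisk while Oomph does not, even though Oomph's average effect is more than twice as large. The asterisk is rewarding precision, not oomph.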

Such is the case with many economic studies, including this one by Thomas Dee, blogged about by Greg Mankiw at Harvard. Dee seems to be resting a big part of his case on the finding that a teacher's gender affects a student's achievement by 0.04 standard deviations, i.e., 4% of a standard deviation. Not 4%, but 4% of a standard deviation. I'm sure, because he has a nice large sample, that Dee's finding is precise. But it has about as much oomph as a 1987 Toyota Tercel with flat tires and a worn clutch.
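For a sense of scale (assuming, as an illustration of my own and not anything in Dee's paper, that achievement scores are roughly bell-shaped): a 0.04-standard-deviation bump moves the average student from the 50th percentile to about the 51.6th.

```python
from scipy import stats

# A 0.04 SD shift under a normal curve: 50th percentile -> about 51.6th.
print(round(stats.norm.cdf(0.04) * 100, 1))   # 51.6
```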

Dee is doing very interesting work on a variety of issues in public economics, and on education in particular. But he needs to stop making a big deal out of results that merely show statistical significance.