By Eric Holthaus
A new study out on Monday makes an audacious claim: Hurricanes can be made safer just by changing their names. If you haven’t seen this headline yet, I defy you to guess the reason.
Go on …
OK, fine. I’ll tell you, but you won’t believe me. Published in the Proceedings of the National Academy of Sciences, the study alleges that hurricanes with female names are more deadly than those with male names because—get this—people don’t take them as seriously. It’s a story that’s quickly rocketed to the front page of /r/nottheonion, where the discussion surrounding it is priceless.
Except there’s at least one major flaw in the study. From Ed Yong at National Geographic:
But [National Center for Atmospheric Research social scientist Jeff] Lazo thinks that neither the archival analysis nor the psychological experiments support the team’s conclusions. For a start, they analysed hurricane data from 1950, but hurricanes all had female names at first. They only started getting male names on alternate years in 1979. This matters because hurricanes have also, on average, been getting less deadly over time. “It could be that more people die in female-named hurricanes, simply because more people died in hurricanes on average before they started getting male names,” says Lazo.
Whoops. That’s a pretty basic error for a study that tries to correlate the deadliness of something over time. Indeed, when the authors did attempt to account for this by comparing only storms after 1979, any correlation between names and deadliness vanished, as you might expect. Ideally, to back up a claim like this, you’d want lots of data, and there simply haven’t been enough years of alternately named hurricanes to achieve statistical significance.
To test my hypothesis that there isn’t enough data to support the claim that the gender of storm names is in any way related to how deadly they are, I used the authors’ own data (.XLS) to figure out what would happen if I removed the single deadliest remaining storm from their post-1979 dataset, Hurricane Sandy. (The authors had already removed Hurricane Katrina and Hurricane Audrey of 1957 for similar reasons.) While we may think of the name Sandy as a bit gender-ambiguous, the authors categorized it as very feminine—a 9.0 on an 11.0 scale.
Here’s the correlation between the authors’ own “Masculinity-Femininity Index” (which qualitatively ranks names on an 11-point scale according to gender) and number of deaths for each of the 52 storms that made landfall between 1979 and 2012.
With Hurricane Sandy:
Without Hurricane Sandy:
Hurricane Sandy singlehandedly turns the authors’ entire premise on its head. With Sandy, an extreme outlier, set aside, male-named hurricanes now cause more deaths than female-named ones. Harold Brooks of NOAA has performed a similar analysis on this data (removing Sandy) with similar results, which he shared as a comment on Yong’s blog post.
The authors conclude: “Although our findings do not definitively establish the processes involved, the phenomenon we identified could be viewed as a hazardous form of implicit sexism.” The authors have also responded to Yong’s criticism on his blog post:
Although it is true that if we model the data using only hurricanes since 1979 (n=54) this is too small a sample to obtain a significant interaction, when we model the fatalities of all hurricanes since 1950 using their degree of femininity, the interaction between name-femininity and damage is statistically significant. That is a key result. Specifically, for storms that did a lot of damage, the femininity of their names significantly predicted their death toll.
Is this a statistical fluke? Lazo says, “It could be that more people die in female-named hurricanes, simply because more people died in hurricanes on average before they started getting male names.” But no, that is not the case according to our data and as reported in the paper. We included elapsed years (years since the hurricane) in our modeling and this did not have any significant effect in predicting fatalities. In other words, how long ago the storm occurred did not predict its death toll.
My suspicion is that this study is a classic example of confirmation bias: The authors likely knew what result they were going for when they set out to do the study, and sure enough, they found it.