My university is considering a campus-wide smoking ban, justified in part by the claim that second hand smoke kills more than fifty thousand people a year. I am generally suspicious of claims of that sort, so I have been trying to explore this one. It turns out that it is a misstatement of a claim in a 2005 report from the California EPA. In that report, 50,000 is the midpoint of a range of possible values. In the justification for the proposed ban, it has been converted to a lower bound.
More interesting is the question of where the number comes from. Reading the 2005 report, I was unable to answer that question. The number also appears in a Surgeon General's Report, but on reading it, it is reasonably clear that the report is simply repeating the CA EPA figure, not offering an independent estimate.
How could such an estimate be made, given the obvious problems in arranging controlled experiments on the effect of potentially lethal pollutants? One way is by using natural experiments. There have been a number of studies that looked at a city that imposed a smoking ban, compared heart attack death rates after the ban with death rates in comparable cities, and reported a surprisingly rapid and large effect.
There is a problem with that approach. Heart attack deaths in a single city vary randomly—with or without smoking bans, they sometimes go up and sometimes go down. If you want to argue that second hand smoke causes a lot of heart attacks, all you have to do is find one city where a ban was followed by a decline and report that result. Given the pressure for anti-smoking measures, there are incentives for academics to do so. And even if the researchers are honest and pick their city at random, the study may be more likely to be completed and published if it gets a striking result, especially one that fits what many people want to believe.
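The point about random variation can be made concrete with a toy simulation. The sketch below (in Python; every number in it is invented for illustration, and the normal approximation to monthly death counts is my own simplification) gives fifty hypothetical cities identical true death rates before and after a "ban." Purely by chance, roughly half show declines, and a researcher free to choose his city can report a sizable drop from an effect that does not exist.

```python
# Toy simulation: fifty cities, each with the SAME true heart-attack
# death rate before and after a hypothetical smoking ban.
import math
import random

random.seed(1)

N_CITIES = 50
TRUE_RATE = 20.0  # invented: average deaths per month; the "ban" has no real effect

def yearly_deaths(rate):
    # Twelve months of Poisson-like counts, approximated by a normal
    # with mean 12 * rate and standard deviation sqrt(12 * rate).
    return random.gauss(12 * rate, math.sqrt(12 * rate))

changes = []
for _ in range(N_CITIES):
    before = yearly_deaths(TRUE_RATE)  # year before the ban
    after = yearly_deaths(TRUE_RATE)   # year after -- same true rate
    changes.append(100 * (after - before) / before)

declines = sum(1 for c in changes if c < 0)
print(f"cities showing a decline: {declines} of {N_CITIES}")
print(f"biggest decline: {min(changes):.1f}%")  # what a cherry-picker would report
print(f"biggest rise:    {max(changes):.1f}%")
```

Run it and you get declines in roughly half the cities, with the largest decline and the largest rise of similar magnitude—which is just the pattern the NBER paper quoted below reports finding in the actual data.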
One can always find possible problems with studies of controversial issues, especially ones that produce results one does not want to believe, but in this case there is at least some evidence. A 2009 NBER study analyzed all of the data and concluded that there was no effect from smoking bans—cities where the ban was followed by a decline in heart attack deaths were about as common as cities where it was followed by a rise. If that result is correct, it strongly suggests that the conventional view is the result of cherry picking the data.
Which fits my suspicion of scientific "facts" asserted in political controversies, especially ones supported mostly by the fact that authorities such as the Surgeon General's Report and the CA EPA say they are true. It also fits my more general suspicion of the too popular idea of Official Scientific Truth, to be established by consulting the Official Scientific Authorities rather than by looking at arguments and evidence.
Of course, I too have my biases—not with regard to smoking, since I'm a non-smoker, but with regard to Official Scientific Truth. Can any reader help correct them by pointing me at convincing evidence not merely that second hand smoke has some negative effect, which strikes me as a priori likely, but for the size of the effect? Or at a convincing critique of the NBER paper?
"All previous published studies on the health effects of smoking bans share a common methodology: they compare the outcomes in a single community that has passed a smoking ban with outcomes in a small set of nearby communities that have not passed bans. A major contribution of this paper is that we simulate the results from all possible small‐scale studies using subsamples from the national data. We find that large short‐term increases in AMI incidence following a smoking ban are as common as the large decreases reported in the published literature."
(Shetty et al., "Changes In U.S. Hospitalization And Mortality Rates Following Smoking Bans," NBER working paper 14790)