From the uncorrected final rough draft manuscript of Gary Kleck, TARGETING GUNS: FIREARMS AND THEIR CONTROL, scheduled to be published in late 1997:
For a researcher to be a "professional" implies, at minimum, two things: (1) a mastery of the existing body of knowledge, and (2) advanced formal training in the research methods of the field, sufficient to enable the person to make useful new contributions that improve on prior research. Merely publishing articles in professional journals is no guarantee that the author is a professional. Judging from what is allowed into print in medical journals, the referees evaluating paper submissions to the journals seem to be amateurs, whose only claim to expert status may be a record of previous publications in medical journals, publications likewise refereed by amateurs.
While it is possible for amateurs to make useful contributions with respect to topics requiring more modest skills, such as simple descriptions of phenomena, they are far more likely to make serious mistakes in tackling more complex topics, such as the causal linkages between phenomena. Based solely on the quality of published work, it is clear that the researchers who publish their work on guns and violence primarily in medical journals are almost all amateurs, in the sense that they have neither an adequate mastery of the body of relevant substantive knowledge nor professional-level skills in applying the appropriate research methods. Although there is nothing about a medical degree which disqualifies someone from doing research on the links between guns and violence, there is also nothing in medical school training that adequately prepares one to do such work.
As a result of these limitations, combined with a willingness to combine scholarship with personal advocacy of a political agenda (Kates et al. 1995; see Teret et al. 1990 for an overt defense of advocacy research), research published in medical outlets is commonly of poor quality. The major exceptions are (1) research on the more traditionally medical aspects of gun violence, such as the frequency and nature of gunshot wounds and their treatment, and (2) research providing simple descriptions of gun homicides, suicides, accidents, and nonfatal woundings, usually based on examination of hospital and medical examiner records. Medical researchers are better qualified than others to do the former kind of work, while the latter requires only modest research skills.
Unfortunately, the central research issues in the guns-violence field are ones of causal linkages. Does greater gun availability cause more violence? Do higher violence rates cause higher gun ownership rates? Does the use of a gun by an aggressor cause higher risks of attack, injury, or death for the victim? Does the use of a gun by a victim cause lower risks of attack, injury, or death for the victim? It is fair to describe the work of medical researchers on these issues as almost uniformly incompetent, and consistently biased.
This sort of sweeping assessment should be backed up by specifics as to the ways in which this research is substandard.
The following is a brief and undoubtedly incomplete list of some of the more common and serious shortcomings of medical studies drawing conclusions about guns-violence causal links.
(1) An ignorance of prior research, and an almost complete ignorance of the technically sound research, with reviews limited to the generally primitive research published in medical journals (consult any review published in a medical journal and compare it to studies cited in PB).
(2) As a result of the preceding problem, an ignorance of the basic issues, i.e. what questions need to be addressed. For example, the important policy issue is not the association of gun levels with rates of gun violence (e.g. the gun homicide rate or the gun suicide rate) but rather the association of gun levels with rates of total violence (e.g. the total homicide rate or the total suicide rate) (e.g., Violence Prevention Task Force of the Eastern Association for the Surgery of Trauma 1995, p. 165).
(3) (Related to the preceding problem): Confusing the effects of the weapon on violence with effects of the intentions of the user of the weapon (or a failure to recognize and acknowledge that this is even an issue). That is, fatality rates in gun attacks may be higher than in knife attacks because those who choose guns are more lethal in their intentions, and this, rather than the weapon itself, may at least partly account for the fatality rate difference.
(4) The failure to address (or even be aware of) the possible two-way relationship between gun availability and violence, leading to a confusion of cause and effect. Thus, in cases where higher violence levels caused more people to acquire guns, researchers misinterpreted the positive association to mean that higher gun levels had caused higher violence rates (e.g. Sloan, Kellermann, et al. 1988; Killias 1993a).
(5) The use of small, unrepresentative, nonprobability local convenience samples that preclude generalization to large populations, and increase the likelihood of sample bias (e.g. Brent et al. 1988, based on 27 adolescent suicide victims in the Pittsburgh area and matching controls; or Kellermann et al. 1993, who drew conclusions about the entire gun-owning population based on a nonprobability sample of households in the high-homicide areas of just three nonrandomly selected urban counties).
(6) The use of primitive bivariate data analytical methods that fail to control for confounding factors (e.g. Sloan, Kellermann et al. 1988; Killias 1993a; Lester 1989b). The Sloan, Kellermann et al. comparison of Seattle and Vancouver, published in the most prestigious journal in medicine, the New England Journal of Medicine, was arguably the most technically primitive study of guns-violence links ever published in a professional journal, in a field where poor research abounds. It used dubious measures of gun ownership that turned out to be inaccurate, bivariate data analysis, and a sample consisting of a grand total of two nonrandomly selected cities. It was described by Professor James Wright (1989) as "little more than polemics masquerading as serious research."
The reaction in the community of medical researchers, on the other hand, was very different. The study was lavishly praised by Dr. Garen Wintemute (1989-90, p. 21) as "an elegant evaluation of firearm mortality" and commended by CDC employees James Mercy and Vernon Houk (1988, p. 1284) for its application of "scientific methods."
(7) Ignorance about valid measures of gun availability, or about the limits of those measures (e.g. Sloan, Kellermann et al. 1988), or the willingness to assert an association between gun availability and violence when the authors did not even measure gun availability (e.g. Farmer and Rohde 1980; Boyd 1983; Boyd and Moscicki 1986; Wintemute 1987; Marzuk et al. 1992; Tardiff et al. 1994).
The consequences of ignorance about gun measurement flaws are especially apparent in the comparison of Seattle with Vancouver by Sloan, Kellermann and their colleagues (1988). They compared noncomparable rates of issuance of very different kinds of gun permits (Wright 1989, p. 46) and inappropriately applied an indirect measure of "gun prevalence." When direct survey measures later became available, however, they indicated that the household prevalence levels of gun ownership in the two cities were essentially identical: 23% in greater Vancouver (Mauser 1989) and 24% in Seattle (Callahan et al. 1994, p. 475).
(8) Selective reporting of findings favorable to pro-control or anti-gun conclusions and nondisclosure of unsupportive findings (e.g. Lester 1991a and Killias 1993b, discussed earlier in this chapter; see also examples cited in Kates et al. 1995).
(9) Lumping together disparate gun-related forms of violence such as intentionally and unintentionally inflicted injuries, and self-inflicted and other-inflicted injuries, and drawing conclusions about all gun violence as a single entity, in a way that obscures the radically different influences guns have on each different form of violence (e.g. Sadowski et al. 1989).
(10) Lumping together children and adolescents, in a way that conceals how little gun violence involves the former and how much involves the latter. E.g. Teret and Wintemute (1983) claimed that "almost 1,000 children die each year from unintentional gunshot wounds," a statistic that turned out to actually refer to all persons aged 0-24; almost all of the gun accident deaths in this age range involved adolescents and young adults, not children (PB:277, 309). In 1992, only 18% of accidental gun deaths in this age range involved children age 0-12, among whom the death rate is virtually zero (National Safety Council 1995, p. 32; PB:277). This practice serves propagandistic purposes by playing on people's strong feelings about children, and their greater willingness to view children (vs. adolescents) as innocent victims of violence. It also directs disproportionate attention to potential solutions that are relevant only to the rare child-involved case (e.g. "child-proofing" guns to prevent gun accidents involving shooters young enough to be affected by the measures - see Nelson et al. 1996, p. 1746 for an example).
(11) Unprofessional bias in interpretation of results, such that all results lead to pro-control/anti-gun conclusions, regardless of the character of the evidence. That is, the pro-control/anti-gun propositions are treated as nonfalsifiable hypotheses. E.g., Callahan and his colleagues' (1994) evidence indicated, by their own admission, that a gun buy-back program was ineffective, prompting the authors to call for more and better buy-back programs. Thus, it was clear that these authors were prepared to support buy-back programs no matter how negative their research results were (see Kleck 1996b, pp. 31-32 for a critique).
(12) A lack of simple common sense, as when medical researchers attempted to establish how rarely gun owners conceal their gun ownership when interviewed in surveys, by studying a sample of registered gun owners (Kellermann et al. 1990). Given that all registered gun owners have, by definition, already shown themselves to be willing to let strangers know that they own guns, the fact that nearly all of this small and unrepresentative sample of gun owners told interviewers that they own guns obviously can tell us nothing about how large a share of the general gun owner population is not willing to tell strangers that they own guns. Or consider Kellermann's assumption that if victims of "home invasions" do not tell police about their defensive uses of guns, it means they did not occur (Kellermann et al. 1995; see Chapter 5 herein for further commentary).
In a typical example, a researcher might assert that gun levels increase homicide rates, and cite five supposedly supportive studies. Of the five, one might provide weak and debatable support, one might be merely a previous expression of similar personal opinion on the issue (often in an editorial or propaganda publication), another might not even address the topic, still another might have drawn non sequitur conclusions based on irrelevant information, and one might even have generated evidence indicating the exact opposite conclusion.
Consider, for example, a widely cited article by Arthur Kellermann and his colleagues (1993, p. 1090), wherein they claim that "cohort and interrupted time-series studies have demonstrated a strong link between the availability of guns and community rates of homicide," citing four previous studies in support (their cites 2 and 15-17). Naive readers might assume that Kellermann et al. were citing four "cohort and interrupted time-series studies" that had empirically documented a guns-homicide association. In fact, the first of the cited publications was not an empirical study at all, but rather a review of the literature (their cite 2) which did not support this assertion. The part of this report cited by Kellermann et al. (pp. 42-97) did not even address the issue in question, while the part that did address it did not conclude that there was a strong guns-homicide link. Instead, the review authors cited an earlier review as having indicated that studies "generally find that greater gun availability is associated with ... somewhat greater rates of felony murder, but do not account for a large fraction of the variation," and drew an unmistakably "no decision" conclusion, noting problems of causal order that Kellermann and his medical colleagues have consistently ignored (Reiss and Roth 1993, p. 268, emphasis added). Perhaps Kellermann simply did not know enough of the rudiments of statistics to understand that to state that one variable does not account for a large fraction of the variation in the other variable is the same as saying that the association is not strong, i.e. exactly the opposite of the conclusion Kellermann et al. were citing the Reiss and Roth review to support.
Among the remaining three supposedly supportive studies, one did not even measure "the availability of guns," nor did its authors claim to have done so, and thus it could not possibly have established any guns-homicide association, never mind a strong one (their cite 17, to Loftin et al. 1991). This was an interrupted time-series study, but neither it nor any such study has ever measured any association between gun availability and homicide rates. None of the cited studies were "cohort" studies. Another of the cited studies only weakly supported the authors' claim (cite 15, Cook 1979). This study addressed only robbery homicides, which accounted for only 10% of U.S. homicides the year Kellermann et al. wrote (U.S. Federal Bureau of Investigation 1994, p. 21), and apparently confused an effect of robbery murders on rates of gun ownership with an effect of gun levels on robbery murder rates. Finally, the authors also cited a crude two-city study of their own (Sloan, Kellermann, Reay, et al. 1988) that simply ignored the causal order issue, used indirect measures of gun ownership that turned out to be inaccurate, and drew conclusions that ignored pronounced differences between the cities that were responsible for some or all of the observed differences in homicide.
In sharp contrast, the authors completely omitted any mention of the most sophisticated research on the links between gun levels and homicide rates, research that did address the causal order issue, measured gun ownership levels, and used multivariate controls. Perhaps it is just coincidence that these more sophisticated but uncited analyses generally found no causal effect of gun levels on homicide rates (Kleck 1984a; Magaddino and Medoff 1984 [only some of whose results were based on models taking two-way relationships into account; see p. 251, column 1, p. 253, column 1, and p. 258 for the relevant results]; Kleck 1991, pp. 191-201, 219-222; but see Kleck 1979, whose results were superseded by Kleck 1984a). Perhaps Kellermann and his colleagues felt that studies that did not even measure gun availability provided a sounder basis for drawing conclusions about the guns-homicide link than studies that did.
Or consider a report by the "Violence Prevention Task Force of the Eastern Association for the Surgery of Trauma," whose members stated that "there is compelling evidence that relates the mere availability of firearms to the occurrence of firearms deaths" (1995, p. 165). Leaving aside the somewhat tautological linking of guns to gun deaths (the main policy issue is the link between guns and total deaths, not gun deaths), the authors cite five studies in support (their cites 3, 9, 17, 37, and 38), two of which are reviews with no original information and just as pronounced a bias as that of the physician authors (cites 3 and 9), two of which were opinion pieces (cites 17 and 37), and one of which was a more mildly biased but outdated review by Philip Cook, published 14 years earlier (cite 38). Cook's considerably more complete and contemporary review (1991) was not cited, though perhaps it was just coincidence that this review was not nearly so enthusiastic about the evidence linking guns and violent deaths. As is customary in medical articles, none of the far more sophisticated contrary evidence was cited (see PB, Chapters 7-9 or Kleck 1995 for a summary).
It is also common for medical writers to buttress their own opinions by referring to other people's opinions, as expressed in editorials, letters-to-the-editor, and speeches, as well as scholarly articles, without, however, making it clear to their readers that they are merely citing other people's personal opinions. For example, CDC employees James Mercy and Mark Rosenberg, along with other colleagues (Saltzman et al. 1992), claimed that gun restrictions do not lead to increased homicide without guns, and padded out the list of citations supposedly supporting this dubious claim by citing an editorial written by another person sharing the same personal opinion (p. 3045, citation 21). Likewise, Rivara and Stapleton (1982) asserted that "nowhere in the Constitution is the individual guaranteed the right to possess a firearm" (p. 37), citing a single source in support that turned out to be a report of the personal opinion of a congressman who supported gun control (their cite 27). In contrast, the authors provided no citations to the scholarly literature written by Second Amendment specialists, which strongly supports the conclusion that this Amendment did indeed recognize an individual right to possess guns and other weapons (see the 25 scholarly articles and chapters listed in Kates et al. 1995, pp. 519-520; Kates' own seminal 1982 article and the earlier studies it reviewed; Reynolds 1995).
In other cases, the "citation" problem is really the failure to locate any supporting sources to cite. Rivara and Stapleton also made the remarkable statement that "most homicides are committed by family members without prior criminal convictions" (1982, p. 37). For this claim, they did not cite any supportive studies at all, because there are none. The latest national data available at the time they wrote indicated that only 16% of homicides in 1980 were committed by family members, with or without criminal convictions (U.S. Federal Bureau of Investigation 1981, p. 12), a figure nowhere near a majority.
Runyan and Gerken (1989) stated that research studies "suggest [gun control] can reduce violence" (p. 2275), citing for support four studies (their cites 53-56), two of which unambiguously concluded exactly the opposite (their cites 54 and 55). Likewise, Callahan et al. (1994) claimed that removal of guns from homes would decrease the risk of suicide, citing in support a two-city study whose findings indicated precisely the opposite (no overall difference in suicide, though higher suicide levels among 15-24 year-olds and lower levels among 35-44 year-olds in the higher gun level city; see their cite 9). In another example, Callahan and Rivara (1992, p. 5041) claimed that "one in twenty" students carried guns to school, citing a report of a 1990 national survey for support. In fact, the very first page of that report explicitly stressed, in no uncertain terms, that the survey did not even ask about carrying guns to school. No reliable source has ever indicated gun carrying in schools to be even a tenth as high as five percent (see Chapter 6).
Even more imaginatively, Schetky (1985) claimed that murders of wives by their husbands "accounted for 11.1% of all homicides in 1975" (p. 229), citing for support p. 15 of the 1975 Uniform Crime Reports (UCR). Neither the 11.1% statistic, nor any other statistic on the frequency of husband-wife homicides appeared in that report, on p. 15 or anywhere else, nor did that figure appear in any other UCR report. The first UCR to report relevant data was that for 1977, and it indicated that such killings accounted for only 5.8% of homicides (U.S. Federal Bureau of Investigation 1978, p. 12), a share that only declined in subsequent years before Schetky's article was published, eventually declining to 3.7% by 1994 (U.S. Federal Bureau of Investigation 1995, p. 19).
Arthur Kellermann and Donald Reay (1986) cited similarly nonexistent published statistics to support their assertion that "less than 2 percent of homicides nationally are considered legally justifiable" (p. 1559, citing sources 11 and 13). Neither FBI publication offered any statistics on this matter, never mind a figure "less than 2 percent." Further, although the FBI had unpublished data pertaining to some subtypes of "legally justifiable" homicides, these covered only a small subset of lawful defensive homicides (PB:112).
Dr. Kellermann in particular seems to have problems in finding and citing supportive evidence for his claims. In a 1992 article (Kellermann et al. 1992), he and his coauthors claimed that "limiting access to firearms could prevent many suicides" (p. 467), citing in support a study (Rich et al. 1990) that had drawn precisely the opposite conclusion. Rich and his colleagues had summarized the findings of their two studies as indicating that "gun control legislation may have led to decreased use of guns by suicidal men, but the difference was apparently offset by an increase in suicide by leaping. In the case of men using guns for suicide, these data support a hypothesis of substitution of suicide method" (p. 342).
In that same article, Kellermann and his colleagues cited an impressive list of six studies (their cites 10-15, p. 467) that they claimed had "studied variations in the rates of gun ownership and suicide" across different areas or over time. Of these six, four did not measure gun ownership at all, and thus could not have studied variations in gun ownership (their cites 10, 12, 14, 15), while one other studied "variation" across just two cities, using measures of gun ownership that turned out to be invalid (their cite 13). Perhaps it was just coincidence that these falsely cited studies generally drew conclusions supportive of gun control, while the many studies that were relevant (having actually measured the association between gun ownership and suicide rates) but that were not cited, overwhelmingly indicated no significant association between gun levels and total suicide rates (Lester 1987; 1988a; 1988c; 1989b; Clarke and Jones 1989; Lester 1990; Sloan et al. 1990; PB:255-256, 268; Killias 1993b; two dissenting studies were Lester 1989b; Moyer and Carrington 1992).
Likewise, in an earlier article, Dr. Kellermann and his colleagues (Sloan, Kellermann, et al. 1988, p. 1256) stated that "some have argued that restricting access to handguns could substantially reduce our annual rate of homicide," impressively citing as one of three supporting sources the most comprehensive review of the guns-violence literature available at the time, written by James Wright and his colleagues (1981). While many have indeed "argued" this assertion, Wright et al. were not among them. Their massive review of the pre-1981 literature led them to conclude, in the government report version of the review, that "there is some evidence that under some conditions, reductions in gun-related crimes can be achieved through gun control legislation, but this outcome will be neither very common nor especially pronounced" (Wright et al. 1981, p. 541). In the commercial version of their report, they were even more pessimistic: "the probable benefits of stricter gun controls (itself a highly nebulous concept) in terms of crime reduction are at best uncertain, and at worst close to nil... our view is that the prospects for ameliorating the problem of criminal violence through stricter controls over the civilian ownership, purchase, or use of firearms are dim" (Wright et al. 1983, p. 22).
At least equally common are citations to studies that do not support the medical writer's claims for the simple reason that the studies did not provide any relevant information at all. Two writers who usually publish in medical journals, Stephen Teret and Garen Wintemute, claimed that "two separate studies in Detroit ... showed that as rates of handgun ownership increased, the incidence of [accidental] shootings increased as well" (1983, p. 347, their footnotes 29 and 30). Neither of these studies even measured trends in handgun ownership (though they noted trends in "new permits to purchase firearms," a measure with no known association with rates of handgun ownership), nor did they establish an association with trends in the frequency of shootings. Even more misleadingly, one of the "two separate studies" had no new data on trends in Detroit accidental shootings at all, but merely reprinted data presented in the other cited study (compare Heins et al. 1974, p. 328 with Newton and Zimring 1969, p. 71).
Or consider the similar claim of Dr. Kenneth Tardiff and his colleagues (1994) that homicide increases observed in their study might be due to "increased availability of firearms among high school students" (p. 45), a claim supported by reference to three sources (their citations 17, 42, 43), none of which contained any information on trends in availability of firearms in any group, never mind increases among high school students.
Likewise, consider Dr. Diane Schetky's claim that a "handgun purchased for self-defense is more likely to be used on another family member than on an intruder" (1985, p. 229), buttressed by reference to two sources (her References 1 and 7), neither of which reported any data on how likely guns are to be "used" on intruders. One was a propaganda publication from the head of Handgun Control Inc., and the other was a government report recommending a national ban on handguns, and both alluded only to the small share of defensive gun uses that result in the deaths of intruders, remaining silent about the total number of times guns are defensively "used," either on intruders or in general.
A CDC report on gun carrying alluded to the "apparent effectiveness of prohibiting public firearm-carrying for reducing firearm-related homicides," citing two studies in support (citations 9 and 10). Neither study addressed the impact of laws prohibiting firearm-carrying per se, but rather examined the effects of changes in how violators of such laws are punished (mandatory vs. discretionary sentencing). One of the studies did not even separately examine firearms homicides, and with respect to total homicides concluded that "no statistically significant changes in the homicide rate were observed" (Deutsch and Alt 1977, p. 566). Similarly, Webster et al. (1993, p. 1604) alleged that "weapon carrying can increase risks ... to the individual carrying the weapon" and cited two studies in support (their cites 7 and 8). Neither cited study even addressed the risks of weapon carrying, never mind supported the allegation.
Another CDC report (CDC 1992) discussed "risk factors, such as access to firearms" and their possible "impact on unintentional firearm related mortality," alleging that "the availability of firearms has been directly associated with unintentional gunshot wounds" (p. 452), and citing a single study for support (their cite 5). The cited study did not establish any such association, because the authors had no measure of "availability of firearms" and therefore could not have established any association at all, never mind a "direct" one. Quite to the contrary, the authors expressed, explicitly and at length, considerable frustration about their inability to get any gun availability data (Rushforth et al. 1975, p. 504). In contrast, a sophisticated and comprehensive study of 170 large cities in the U.S., which did measure gun availability, was not cited, though it was available to the CDC authors. That study found no significant association between gun availability and the fatal gun accident rate (PB:303-304, 319).
It is testimony to the endemic prevalence of miscitation in the medical literature on guns and violence that the authors of this last miscited study (Rushforth et al. 1975) had themselves miscited a still earlier piece of research. Rushforth and his colleagues stated that fatal gun accident rates had shown "very little change in rates over the past 20 years" (1975, p. 503), citing for support a single study, whose author had actually stated that gun accident rates had "steadily decreased since the 1930s" (DiMaio 1973, p. 2). To acknowledge decreases in gun accident rates, during an era of increasing gun ownership, would make it harder to persuade readers that more guns lead to more gun accidents, but perhaps this was unrelated to the authors' miscitation.
Rushforth et al. are hardly the only public health writers who have failed to inform their readers that fatal gun accidents (FGAs) have declined as gun ownership has increased in recent decades. CDC employees Nollie Wood and James Mercy (1988) reported that FGAs declined over the period 1970-1984, but apparently did not consider it relevant to mention that the size of the U.S. gun stock increased by 74%, and the size of the handgun stock increased by 105%, over this same period (PB:50, based on data available to Wood and Mercy). This seems a curious omission in light of the fact that CDC employees rarely miss an opportunity to comment on time periods when both gun ownership and violence increased together (see also Kates et al. 1995, pp. 557-561 for an extended discussion of what they refer to as the "fraudulent suppression of the steep decline in fatal gun accidents").
Some medical/public health researchers make false citations even when it is unnecessary, in the course of making uncontroversial points. David Hemenway and his colleagues from the Harvard School of Public Health (1995, p. 48) cited a National Crime Victimization Survey (NCVS) report as indicating that "49% of US households contain a firearm" (their cite 28). The NCVS does not ask about gun ownership and neither this report nor any other NCVS report contains information on gun prevalence. The identical miscitation was repeated a year later in the same journal by Sinauer, Annest, and Mercy (1996). Unless this was an extraordinary coincidence, the latter authors probably made the mistake because they copied the Hemenway miscitation without bothering to check its authenticity.
It bears stressing that the authors were not the only people complicit in the foregoing miscitations. In each case, inaccurate citations that would have been conspicuous to genuinely expert reviewers went uncorrected because both journal editors and the supposedly expert referees who reviewed the papers failed to recognize the mistakes and ensure that they were fixed, indicating that these parties either were not familiar enough with the research literature in this field to recognize the inaccuracies, or were not concerned enough about miscitations in a pro-control direction to make sure that they were corrected. Kates and his colleagues identified the costs of incompetent and biased peer review: "An atmosphere in which criticism in general, and peer review in particular, comes from only one perspective not only allows error, but promotes it" (1995, p. 530).
Perhaps some of the foregoing examples of false citation were not due to self-conscious dishonesty but rather to a genuine inability to understand previous research, labelled "gun-aversive dyslexia" by Kates and his colleagues when it is "engendered by a fear and loathing of guns so profound that health advocate sages who encounter adverse facts may be honestly unable to comprehend them" (1995, p. 531; see this source for numerous examples).