The Field of Firearms Forensics Is Flawed

In 2003, Donald Kennedy, then editor in chief of the journal Science, wrote an editorial titled "Forensic Science: Oxymoron?" He answered that question with a "yes." Unfortunately, the answer remains the same today. Unproven techniques are still used by forensic experts, and courts continue to admit their testimony largely unchallenged. One field whose scientific limitations courts have only recently begun to recognize is firearms identification: comparing bullets and cartridge cases to determine whether they were fired by the same gun. Contrary to popular belief, firearms identification is a field that relies heavily on smoke and mirrors.

Firearms examiners suffer from "Sherlock Holmes syndrome": they claim that they can match a cartridge case or bullet to a specific gun and thereby solve a case. Science is not on their side, however. Very few studies of firearms identification exist, and none reliably establishes that examiners can tell whether a particular gun fired a given bullet or cartridge case. Like all scientific claims, firearms identification must be consistent and evidence-based; fundamental justice requires no less. Without such standards, there is a high risk of innocent people being convicted and guilty people going free. Perhaps because of this, courts have slowly begun to take notice and to limit firearms testimony.

Firearms examiners are considered experts in the courts. Indeed, they do possess the expertise of a practitioner in the application of forensic techniques, much as a physician is a practitioner of medical tools such as drugs or vaccines. But there is a key distinction between this form of expertise and that of a researcher, who is professionally trained in experimental design, statistics and the scientific method, and who manipulates inputs and measures outputs to determine whether a technique is valid. Both types of expertise are valuable, but they serve different purposes. A nurse is the right person to administer your COVID vaccine. If you want to know whether the vaccine is safe and effective, however, you shouldn't ask the nurse; ask the research scientists who have the training and experience to evaluate it.

Courts have rarely heard testimony from classically trained research scientists who can scrutinize the claims made by firearms examiners and explain the basic principles of science. Only research scientists have the wherewithal to counter the claims of practitioner-experts. What is needed are "anti-expert experts." Such experts are appearing more and more often in courts across the country, and we are proud to be among them.

Skepticism about firearms identification is not a new phenomenon. A 2009 National Research Council (NRC) report criticized the firearms identification field as lacking “a precisely defined process.” Guidelines from the Association of Firearm and Tool Mark Examiners (AFTE) allow examiners to declare a match between a bullet or cartridge case and a particular firearm “when the unique surface contours of two toolmarks are in ‘sufficient agreement.’” According to the guidelines, sufficient agreement is the condition in which the comparison “exceeds the best agreement demonstrated between tool marks known to have been produced by different tools and is consistent with the agreement demonstrated by tool marks known to have been produced by the same tool.” In other words, the criterion for a life-shaping decision is based not on quantitative standards but on the examiner’s subjective experience.

A 2016 report by the President's Council of Advisors on Science and Technology (PCAST) echoed the NRC's conclusion that the firearms identification process is "circular," and it described the kind of empirical studies required to test the validity of firearms identification. Only one such study had been completed at that time, conducted by the Ames Laboratory, a U.S. Department of Energy national laboratory (the "Ames I" study).

Firearms examiners vigorously attacked the NRC and PCAST reports. The reports had no impact on judicial decisions, but they did inspire additional tests of firearms identification accuracy. These studies report astonishingly low error rates, typically around 1 percent or less, which emboldens examiners to testify that their methodology is nearly infallible. But the way these error rates are calculated is deeply misleading, and without anti-expert experts to explain why, courts can be, and have been, led to accept false claims.

Firearms examiners in the field typically reach one of three categorical conclusions: that the items come from the same source, called an "identification"; that they come from different sources, called an "elimination"; or, when the quality of the samples is insufficient to support either conclusion, "inconclusive." Although this "I don't know" category is useful in fieldwork, the way it has been treated in validation studies and presented in court is seriously misleading.

The problem is how to classify an "inconclusive" response in research. Unlike in fieldwork, researchers who study firearms identification in laboratory settings create the cartridge cases and bullets themselves, so they know the "ground truth": every comparison involves either the same gun or different guns. These studies are like a true/false test with only two correct answers; "I don't know," or "inconclusive," is not one of them.

Existing studies, however, count inconclusive answers as correct (i.e., "not errors") without explanation or justification, and these inconclusive answers have an enormous impact on reported error rates. For example, in the Ames I study, researchers reported a false-positive error rate of 1 percent. But here is how they got there: of the 2,178 comparisons they made between nonmatching cartridge cases, 65 percent were correctly called "eliminations." Another 34 percent were called "inconclusive," but instead of keeping those as their own category, the researchers lumped them in with the eliminations, leaving 1 percent as what they called their false-positive rate. If those inconclusive responses are instead counted as errors, the error rate jumps to 35 percent. Seven years later the Ames Laboratory conducted another study, known as Ames II, using the same methodology, and again reported false-positive error rates of less than 1 percent for bullet and cartridge case comparisons. But when inconclusive responses are counted as incorrect rather than correct, the overall error rate skyrockets to 52 percent.
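To make the arithmetic concrete, here is a minimal sketch, in Python, of the two ways of scoring the Ames I nonmatching comparisons described above. The counts are approximations reconstructed from the published percentages, not the raw study data:

    # Scoring the Ames I nonmatching cartridge-case comparisons two ways.
    # Counts are approximate, reconstructed from the percentages in the text.
    total = 2178                          # nonmatching comparisons
    eliminations = round(0.65 * total)    # correct "elimination" calls (~1,416)
    inconclusives = round(0.34 * total)   # "inconclusive" calls (~741)
    false_positives = total - eliminations - inconclusives  # erroneous "identification" calls (~21)

    # The study's approach: inconclusives are lumped in with correct answers.
    reported_rate = false_positives / total
    # The alternative: inconclusives count as errors, since ground truth is known.
    strict_rate = (false_positives + inconclusives) / total

    print(f"Reported false-positive rate: {reported_rate:.0%}")      # ~1%
    print(f"Rate with inconclusives as errors: {strict_rate:.0%}")   # ~35%

The same scoring choice, applied to the Ames II data, is what drives that study's overall error rate from under 1 percent to 52 percent.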

The most striking findings came from the subsequent phases of Ames II. Researchers sent the same items back to the original examiners for reevaluation and then to different examiners, to determine whether results could be repeated by the same examiner and reproduced by another. The findings were shocking: the same examiner looking at the same bullets a second time reached the same conclusion only two thirds of the time. Different examiners looking at the same bullets reached the same conclusion less than a third of the time. So much for the reliability of a second opinion. Yet firearms examiners still appear in court claiming that firearms identification studies show a low error rate.

The English biologist Thomas Huxley famously remarked that science is simply trained and organized common sense. Judges, for the most part, have common sense in abundance, but they need scientists to translate science for courtroom use. That assistance cannot be limited to published articles and scientific reports: scientists are needed in the courtroom itself, and one way to serve there is as an anti-expert expert.

This is an opinion and analysis piece. The views expressed by the author or authors are not necessarily those of Scientific American.

ABOUT THE AUTHOR(S)

    David L. Faigman is chancellor and dean and John F. Digardi Distinguished Professor of Law at the University of California, Hastings College of the Law. Faigman is a regular speaker at judicial conferences on the strengths and weaknesses of forensic specialties. He has testified in more than a dozen cases involving firearms and other forensic areas. He was a senior adviser to the President's Council of Advisors on Science and Technology (PCAST) for its 2016 report.

    Nicholas Scurich is a professor with a joint appointment in the Department of Psychological Science and the Department of Criminology, Law and Society at the University of California, Irvine. Scurich studies topics in applied decision-making as well as the assessment of dangerous and risky behavior. He has testified before state and federal courts regarding firearm identification, as well as other topics that intersect with science and law.

    Thomas D. Albright holds the Conrad T. Prebys Chair in Vision Research at the Salk Institute for Biological Studies, where he is a professor and director of the Vision Center Laboratory. Albright co-chaired the National Academy of Sciences committee on eyewitness identification and served on the National Commission on Forensic Science.
