Q&A with Jennifer Mnookin: Raising the bar for scientific evidence in court


This week, the President’s Council of Advisors on Science and Technology (PCAST) challenged the scientific community and the justice system to dramatically improve the reliability of scientific evidence and testimony presented in criminal courts. In the report, council members unanimously called for widespread changes to the protocols for using scientific evidence.

Jennifer Mnookin, dean of the UCLA School of Law and holder of the David G. Price and Dallas P. Price Chair at UCLA Law, served as co-chair of an advisory group of judges and academics to the council’s working group.

The report maintains that many regularly used forensic “feature-comparison” methods — including firearms identification, bite-mark comparisons, microscopic hair identification and some kinds of DNA testing — have not yet been shown to be scientifically valid and reliable.

The report urges the National Institute of Standards and Technology, the FBI and other agencies to support independent research to assess whether these techniques can meet the fundamental requirements of scientific validity. It calls for the Department of Justice to develop protocols for the admissibility of various types of scientific evidence, and encourages federal judges to curb expert testimony that overstates the reliability of the science. Mnookin and Judge Harry Edwards of the U.S. Court of Appeals for the D.C. Circuit published an op-ed this week in The Washington Post about these issues.

The faculty co-chair of the Program on Understanding Law, Science and Evidence at UCLA Law, Mnookin answered questions about the report’s findings and the path forward.

Is the criminal justice system badly compromised by the use of junk science or inadequate scientific standards?

Without a doubt. There have been many wrongful criminal convictions where the use of faulty forensic science evidence was a major culprit in producing injustice. In fact, overstated, erroneous or unreliable forensic science evidence shows up in about half of all the known DNA exonerations, where DNA testing that was not available at the time of trial reveals that the person who was convicted wasn’t actually the perpetrator.

Two of the most troubling kinds of forensic science evidence are bite-mark identification and microscopic hair comparison evidence. There’s absolutely no good scientific evidence that forensic odontologists can accurately identify individual dentition on the basis of bite marks left on human skin. Experts have based their testimony on their experience rather than on scientific testing, and courts have generally accepted that testimony hook, line and sinker rather than requiring careful validation.

Which do you consider more at fault, the science or expert testimony?

The two issues — the lack of validity of the science itself and the fairly extraordinary overstatements often used in court — are deeply related. Many kinds of pattern-identification evidence were developed as investigatory techniques for law enforcement agents, rather than in university-based research laboratories. Forensic scientists often got their training on the job, and, until recently, many didn’t have any formal background in science. The idea was that through training and experience, they’d develop the expertise to know whether a bullet came from a particular gun, or whether two fingerprints came from a common source. The problem is that doing a pattern-matching task hundreds or even thousands of times does not really establish how well you’re doing it, or how often or in what circumstances you are prone to error.
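The remedy the report points toward is empirical “black-box” testing: give examiners large numbers of comparisons with known ground truth and measure how often they get them wrong. As a rough illustration of what such a study yields — the counts below are hypothetical, and the Wilson score interval is just one standard way to bound the estimate — validation ultimately reduces to straightforward error-rate arithmetic:

```python
import math

# Hypothetical black-box study results (illustrative, not real data):
# examiners made 1,000 comparisons of samples known NOT to come from
# the same source, and wrongly declared a "match" 12 times.
different_source_trials = 1000
false_positives = 12

rate = false_positives / different_source_trials

def wilson_upper_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Upper end of the Wilson score interval for a binomial proportion
    (z = 1.96 corresponds to a ~95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center + margin

print(f"Observed false-positive rate: {rate:.1%}")
print(f"95% upper bound: {wilson_upper_bound(false_positives, different_source_trials):.1%}")
```

Training and experience alone produce no number like this; only controlled studies with known answers do.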

Right now, these kinds of pattern evidence do not have an established statistical foundation. There is no way a firearms examiner can say there is a “one-in-X” chance this bullet came from this particular gun, because we don’t have a statistical model or any legitimate numbers to offer. It’s a subjective judgment.

With DNA, by contrast, an expert can testify that there is, say, a one-in-a-billion chance that a person selected at random would match the biological material found at the crime scene. We can’t offer anything parallel to this probabilistic assessment with these other pattern-matching techniques. And yet, forensic scientists were — and in many cases still are — testifying in court that they are 100 percent confident about their conclusions and that it’s a practical impossibility that they are wrong. That makes for super-strong, and often super-persuasive, testimony, but it’s not scientifically legitimate.
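For contrast, here is a minimal sketch of the kind of calculation that underlies a DNA random-match probability, using made-up allele frequencies (real casework relies on measured population databases and more careful statistical corrections). Under standard independence assumptions, per-locus genotype frequencies are multiplied together, which is what makes a figure like “one in a billion” possible:

```python
from math import prod

# Hypothetical allele frequencies at a few STR loci (illustrative values,
# not real population data). Each tuple holds the frequencies of the two
# alleles observed in the crime-scene profile at that locus.
profile = {
    "D8S1179": (0.10, 0.08),   # heterozygous: alleles with frequencies p and q
    "D21S11":  (0.05, 0.12),
    "TH01":    (0.20, 0.20),   # homozygous: same allele twice
    "FGA":     (0.03, 0.07),
}

def genotype_frequency(p: float, q: float) -> float:
    """Expected genotype frequency under Hardy-Weinberg equilibrium:
    p^2 for a homozygote, 2pq for a heterozygote."""
    return p * p if p == q else 2 * p * q

# Product rule: assuming the loci are statistically independent, the
# random-match probability is the product of the per-locus frequencies.
rmp = prod(genotype_frequency(p, q) for p, q in profile.values())

print(f"Random-match probability: {rmp:.2e} (about 1 in {1 / rmp:,.0f})")
```

Every factor in that product comes from measured population data. For firearms, bite marks and the other pattern disciplines, no analogous measured frequencies exist, so no such computation can even be attempted.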

I do want to be clear that individual forensic practitioners are generally testifying in good faith. The problem is that forensic science simply hasn’t had a “research culture,” and forensic scientists most often aren’t research scientists or particularly sophisticated about what’s required for scientific validation.

This lack of a research culture is precisely why the PCAST report is so important. Despite that gap, the courts have been giving these forensic techniques something of a pass, and they really shouldn’t. There simply cannot be some kind of “forensic science exception” to the basic idea that scientific validation requires appropriate testing.

In this age of technology, significant scientific breakthroughs are happening at a rapid pace. Does this present challenges to the court system?

Of course! Come into any law school classroom and ask the students how many of them have science backgrounds. Some do, but it’s a pretty small minority. And the same is true for judges. So when courts have to assess the admissibility and validity of new techniques, that’s not necessarily within judges’ comfort zones.

And, of course, there are risks in both directions, either in accepting invalid knowledge or in excluding legitimate methods of proof. With these forensic pattern identification techniques, however, I don’t think the fundamental problem is that the science is so complex or sophisticated that judges can’t evaluate its validity. I think the issues have been more institutional than that. Courts rely on precedent. If a technique has been long used, why not just keep using it? The result is that even generally thoughtful judges have tended to default to past judicial decisions that assume, rather than establish, the scientific validity of these disciplines.

What are the most important steps needed to ensure that scientific evidence is reliable?

The PCAST report suggests that we shouldn’t use forensic science techniques in court until they have met basic standards for validation. I quite agree. In the short run, taking that standard seriously might mean that we lose the ability to use some kinds of evidence. But if these techniques truly are valid and reliable, it won’t be especially hard to establish that through careful testing. If we require appropriate validation testing, in the long run we can be more legitimately confident about the scientific evidence we are using to support criminal convictions. We should also do more to ensure that the testimony experts provide is tied to an actual base of knowledge, and that limitations on knowledge and performance are expressed clearly and forthrightly. It would also be terrific to see some significant institutional change in forensic science, so that experts and crime labs weren’t so often tied to law enforcement.

It’s really not a question of being pro-prosecution or pro-defense. It’s about being pro-science and pro-justice. We should all want forensic science evidence to be reliable.