By Alex Calderwood.
DNA evidence carries an aura of the indisputable and is often regarded as the “gold standard” of forensic science. Because of this power, law enforcement has asked forensic laboratories to interpret ever more challenging evidence, such as “touch” DNA recovered from a gun or cell phone that may have been handled by several people. To meet this challenge, complex computer programs known as probabilistic genotyping software were developed. But these systems have been shown to produce different, sometimes contradictory, results, calling their validity into question.
Examining these contradictions is the focus of the 2018-2019 Magic Grant Decoding Differences in Forensic DNA Software. The team, made up of computer scientists, a statistician, a biologist, and lawyers specializing in DNA evidence, is conducting an empirical analysis of DNA software used by New York City. Through their statistical and legal investigations, they are also contributing to an area of computational journalism known as “algorithmic accountability.” Their project will both broaden the statistical vocabulary of the legal defense community and educate journalists about best practices for investigating algorithms more generally.
Their Magic Grant “has been an incredible shot in the arm to our team,” says Professor Jeanna Matthews, a faculty member in the Department of Computer Science at Clarkson University and principal investigator of the project. The grant has helped Prof. Matthews fund a small research group to compare several DNA software platforms and present their findings at legal and computer science gatherings, including DEF CON 26 and the 2019 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Their approach adds to the growing area of algorithmic accountability, expanding it to cover not just the methodology behind an algorithm but also its instantiation in software. By reviewing the actual code, they can assess what, precisely, the software is doing. Is it what we expect? How is it handling data? Are there bugs? Are there other surprises buried in the lines of instructions?
In addition to providing financial support for their interdisciplinary work, the Magic Grant program promotes connections among the other groups in the 2018-19 grant cohort, building a close-knit community of practice. Prof. Matthews found that critiques and insights from other grantees on the storytelling aspects of her project were especially valuable, since her team is composed mostly of technical experts. This year, they have focused on communication, trying to clarify which phrasings journalists can freely wordsmith and which statistical statements must be worded precisely. They found that journalists are often guilty of “transposing the conditional” when writing about the statistical results these software systems produce: confusing the probability of seeing a DNA match if the defendant were not the source with the probability that the defendant is not the source given the match, an error that can make it seem far more likely than the evidence warrants that a defendant is the source of DNA found at the crime scene.
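To see why transposing the conditional misleads, consider a small numerical sketch. The figures below are entirely hypothetical and are not drawn from any real case or from the software the team is studying; they simply show that a tiny random-match probability is not the same thing as a tiny probability of innocence:

```python
# Hypothetical illustration of "transposing the conditional."
# Suppose a DNA profile has a random-match probability of 1 in 1,000,000,
# i.e., the chance an unrelated person matches by coincidence:
p_match_given_not_source = 1e-6

# In a city of 8,000,000 people, several unrelated people would still be
# expected to match purely by chance:
population = 8_000_000
expected_random_matches = population * p_match_given_not_source

# With no other evidence pointing to a particular person, the chance that
# one specific matching individual is the true source is roughly
# 1 / (1 + expected random matches), not 1 - 0.000001:
p_source_given_match = 1 / (1 + expected_random_matches)

print(expected_random_matches)            # 8.0
print(round(p_source_given_match, 3))     # 0.111
```

Under these made-up numbers, a match that sounds like “one in a million” corresponds to only about an 11% chance that this particular matching person is the source, which is exactly the gap that careless statistical phrasing can paper over.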
As they enter the second half of their funding cycle, Prof. Matthews’ group is developing a systematic method for comparing the different DNA software systems on the market, a task that is difficult even with the varied expertise represented on their team. New York City, for example, recently switched from FST (Forensic Statistical Tool), software developed by the NYC government, to STRmix, developed by companies outside the US. Just as access to FST’s source code was finally granted, the city moved to a different system with new barriers to the same level of independent testing. “It is like a game of whack-a-mole for those who believe that defendants should have the right to understand and question the evidence against them even when that evidence is generated by a computer program,” Prof. Matthews said.
In the end, their analysis might help those who have been wrongly convicted, as well as push the field toward more open and accurate versions of software systems that have such profound impact on people’s lives.
This is part of a series of articles about the progress of the 2018-2019 cohort. We publish them in the hope that they might encourage our readers to consider applying for a Magic Grant. It’s a great opportunity, capable of supporting a wide variety of research and creative work. Apply at brwn.co/mg. The deadline is April 8!