Wednesday, June 15, 2016

Power Line


http://www.powerlineblog.com/archives/2016/06/epic-correction-of-the-decade.php

EPIC CORRECTION OF THE DECADE

Hoo-wee, the New York Times will really have to extend itself to top the boner and mother-of-all-corrections at the American Journal of Political Science. This is the journal that published a finding much beloved of liberals a few years back that purported to find scientific evidence that conservatives are more likely to exhibit traits associated with psychoticism, such as authoritarianism and tough-mindedness, and that the supposed “authoritarian” personality of conservatives might even have a genetic basis (and therefore be treatable someday?). Settle in with a cup or glass of your favorite beverage, and get ready to enjoy one of the most epic academic face plants ever.
The original article was called “Correlation not causation: the relationship between personality traits and political ideologies,” and was written by three academics at Virginia Commonwealth University. Here’s the relevant part of the abstract:
Work in psychology, behavioral genetics, and recently political science, however, has demonstrated that political preferences also develop in childhood and are equally influenced by genetic factors. These findings cast doubt on the assumed causal relationship between personality and politics. Here we test the causal relationship between personality traits and political attitudes using a direction of causation structural model on a genetically informative sample. The results suggest that personality traits do not cause people to develop political attitudes; rather, the correlation between the two is a function of an innate common underlying genetic factor.
After the usual long winding path through the existing literature and exhausting discussion of their methodology, we get to some analysis and conclusions, and this is where the fun starts. There’s a lot of jargon and highly technical discussion as usual, but some comprehensible copy:
In line with our expectations, P [for “Psychoticism”] (positively related to tough-mindedness and authoritarianism) is associated with social conservatism and conservative military attitudes. Intriguingly, the strength of the relationship between P and political ideology differs across sexes. P’s link with social conservatism is stronger for females while its link with military attitudes is stronger for males. We also find individuals higher in Neuroticism are more likely to be economically liberal. Furthermore, Neuroticism is completely unrelated to social ideology, which has been the focus of many in the field. Finally, those higher in Social Desirability are also more likely to express socially liberal attitudes.
Here I must explain that “Social Desirability” is a social-science term that, in common-sense language, essentially describes someone who self-consciously wants to get along. Keep this in mind as we get to the epic correction. Keep in mind also that the authors express some surprise that “neurotic” people would turn out to be liberals and support the welfare state:
People higher in Neuroticism tend to be more economically liberal. What is intriguing about this relationship is that it is in the opposite direction of what past theories would predict. . . That is, neurotic people are more likely to support public policies that provide aid to the economically disadvantaged (public housing, foreign aid, immigration, etc).
Now if you’re still with me, take in the opening of this very long correction:
The authors regret that there is an error in the published version of “Correlation not Causation: The Relationship between Personality Traits and Political Ideologies” American Journal of Political Science 56 (1), 34–51. The interpretation of the coding of the political attitude items in the descriptive and preliminary analyses portion of the manuscript was exactly reversed.
I’m just going to let that sit there for a moment while you swallow your beverage and put your cup or glass down so as not to risk damage to your keyboard. To continue:
Thus, where we indicated that higher scores in Table 1 (page 40) reflect a more conservative response, they actually reflect a more liberal response. Specifically, in the original manuscript, the descriptive analyses report that those higher in Eysenck’s psychoticism are more conservative, but they are actually more liberal; and where the original manuscript reports those higher in neuroticism and social desirability are more liberal, they are, in fact, more conservative.
If you go back to the excerpts above and swap out the ideological categories you will have to suppress a horselaugh. Liberals are more prone to “psychoticism” (which the authors hasten to explain doesn’t mean “psychotic,” but what the hell. . .), and hence authoritarianism, which would come as no surprise to any conservative who pays attention to authoritarian liberalism. And people higher in Social Desirability will turn out to be conservatives, which is also congruent with the many simpler survey findings that conservatives are happier than liberals.
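For readers who want to see the mechanics, here is a minimal sketch in Python, using invented numbers rather than the study’s data, of why reversing the coding of an attitude scale flips the sign of every correlation with it while leaving the magnitude unchanged, which is the substance of the authors’ defense.

    # Illustrative only: fabricated trait and attitude scores, not the AJPS data.
    import numpy as np

    rng = np.random.default_rng(0)
    trait = rng.normal(size=1000)                      # e.g., a "psychoticism" score
    attitude = 0.6 * trait + rng.normal(size=1000)     # scale assumed coded higher = more conservative

    r_as_coded = np.corrcoef(trait, attitude)[0, 1]    # correlation as reported
    r_reversed = np.corrcoef(trait, -attitude)[0, 1]   # correlation with the coding reversed

    print(round(r_as_coded, 2), round(r_reversed, 2))  # same size, opposite sign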
If you continue with the explanation in the correction it would seem to suggest that someone simply transposed the data somewhere along the line during the coding steps. Or maybe the authors were hoping for a job with Dan Rather or Katie Couric if tenure didn’t come through? They are defending themselves by saying that the main point of the paper was to demonstrate the magnitude of correlations between personality traits and sociopolitical attitudes, and hence that the ideological direction of the correlation doesn’t matter. This doesn’t wash well with the great folks at the indispensable Retraction Watch, who interviewed one of the academics who spotted the mistake, Steven Ludeke of the University of Southern Denmark, who said:
The erroneous results represented some of the larger correlations between personality and politics ever reported; they were reported and interpreted, repeatedly, in the wrong direction; and then cited at rates that are (for this field) extremely high. And the relationship between personality and politics is, as we note in the paper, quite a “hot” topic, with a large number of new papers appearing every year. So although the errors do not matter for the result that the authors (rightly) see as their most important, I obviously think the errors themselves matter quite a lot, especially for what it says about the scientific process both pre- and post-review.
In other words, if this study hadn’t come out conforming to the liberal narrative and sliming conservatives, it wouldn’t have attracted much notice. By the way, your tax dollars paid for this essential social science research. A note at the end says, “The data for this article were collected with the financial support of the National Institutes of Health.” And people wonder why Republicans in Congress want to cut off federal funding for social science research. As an alternative, I suggest redirecting federal social science funds to Retraction Watch.
And cue Emily Litella whenever you’re ready.

Monday, June 6, 2016

Ah, the glories of reproducible laboratory science!


One evening in November of 2002, Carol Batie was sitting on her living-room couch in Houston, flipping through channels on the television, when she happened to catch a teaser for an upcoming news segment on KHOU 11, the local CBS affiliate. She leapt to her feet. “I scared the kids, I was screaming so loud,” Batie told me recently. “I said, ‘Thank you, God!’ I knew that all these years later, my prayers had been answered.”
The subject of the segment was the Houston Police Department Crime Laboratory, among the largest public forensic centers in Texas. By one estimate, the lab handled DNA evidence from at least 500 cases a year—mostly rapes and murders, but occasionally burglaries and armed robberies. Acting on a tip from a whistle-blower, KHOU 11 had obtained dozens of DNA profiles processed by the lab and sent them to independent experts for analysis. The results, William Thompson, an attorney and a criminology professor at the University of California at Irvine, told a KHOU 11 reporter, were terrifying: It appeared that Houston police technicians were routinely misinterpreting even the most basic samples.
“If this is incompetence, it’s gross incompetence … and repeated gross incompetence,” Thompson said. “You have to wonder if [the techs] could really be that stupid.”
Carol Batie watched the entire segment, rapt. As soon as it ended, she e-mailed KHOU 11. “My son is named Josiah Sutton,” she began, “and he has been falsely accused of a crime.” Four years earlier, Batie explained, Josiah, then 16, and his neighbor Gregory Adams, 19, had been arrested for the rape of a 41-year-old Houston woman, who told police that two young men had abducted her from the parking lot of her apartment complex and taken turns assaulting her as they drove around the city in her Ford Expedition.
A few days after reporting the crime, the woman spotted Sutton and Adams walking down a street in southwest Houston. She flagged down a passing patrol car and told the officers inside that she had seen her rapists. Police detained the boys and brought them to a nearby station for questioning. From the beginning, Sutton and Adams denied any involvement. They both had alibis, and neither of them matched the profile from the victim’s original account: She’d described her assailants as short and skinny. Adams was 5 foot 11 and 180 pounds. Sutton was three inches taller and 25 pounds heavier, the captain of his high-school football team.
The DNA evidence was harder to refute. Having seen enough prime-time TV to believe that a DNA test would vindicate them, Sutton and Adams had agreed, while in custody, to provide the police with blood samples. The blood had been sent to the Houston crime lab, where an analyst named Christy Kim extracted and amplified DNA from the samples until the distinct genetic markers that swim in every human cell were visible, on test strips, as a staggered line of blue dots.

Josiah Sutton with his mother in 2003, a week after his release from prison. Sutton served four years for sexual assault before he was exonerated on the basis of faulty DNA evidence. (Michael Stravato / AP)
Kim then compared those results with DNA obtained from the victim’s body and clothing and from a semen stain found in the back of the Expedition. A vaginal swab contained a complex mixture of genetic material from at least three contributors, including the victim herself. Kim had to determine whether Sutton’s or Adams’s genetic markers could be found anywhere in the pattern of dots. Her report, delivered to police and prosecutors, didn’t implicate Adams, but concluded that Sutton’s DNA was “consistent” with the mixture from the vaginal swab.
In 1999, a jury found Sutton guilty of aggravated kidnapping and sexual assault. He was sentenced to 25 years in prison. “I knew Josiah was innocent,” Batie told me. “Knew in my heart. But what could I do?” She wrote to the governor and to state representatives, but no one proved willing to help. She also wrote to lawyers at the Innocence Project in New York, who told her that, as a rule, they didn’t take cases where a definitive DNA match had been established.
Batie was starting to think her son would never be freed. But the KHOU 11 segment, the first of a multipart investigative series on the Houston crime lab, encouraged her. Shortly after e‑mailing the station, she received a call from David Raziq, a veteran television producer in charge of KHOU 11’s investigative unit. In the course of their work on the series, Raziq and his team had uncovered a couple of close calls with wrongful conviction—in one case, a man had been falsely accused, on the basis of improperly analyzed DNA evidence, of raping his stepdaughter. But in those instances, attorneys had managed to demonstrate the problems before their clients were sent to prison.
Batie hand-delivered the files from her son’s case to Raziq, who forwarded them on to William Thompson, the UC Irvine professor. Thompson had been studying forensic science for decades. He’d begun writing about DNA evidence from a critical perspective in the mid-1980s, as a doctoral candidate at Stanford, and had staked out what he describes as a “lonely” position as a forensic-DNA skeptic. “The technology had been accepted by the public as a silver bullet,” Thompson told me this winter. “I happened to believe that it wasn’t.”
Together with his wife, also an attorney, Thompson unpacked the two boxes containing the files from Sutton’s trial and spread them out across their kitchen table. His wife took the transcripts, and Thompson took the DNA tests. Almost immediately, he found an obvious error: In creating a DNA profile for the victim, Kim had typed three separate samples, two from blood and another from saliva. The resulting DNA profiles, which should have been identical, varied substantially. This alone was cause for serious concern—if the tech couldn’t be trusted to get a consistent DNA profile from a single person, how could she be expected to make sense of a complex mixture like the one from the vaginal swab?
Much more distressing were Kim’s conclusions about the crime-scene evidence. Examining photocopies of the test strips, Thompson saw that Kim had failed to reckon with the fact that Sutton’s DNA didn’t match the semen sample from the backseat of the Expedition. If the semen came from one of the attackers—as was almost certain, based on the victim’s account—then Kim should have been able to subtract those genetic markers, along with the victim’s own, from the vaginal-swab mixture. The markers that remained did not match Sutton’s profile.
“It was exculpatory evidence,” Thompson told me. “And the jury never heard it.”
KHOU 11 flew a reporter out to Irvine and taped a new interview with Thompson. Sutton’s case was taken up by Robert Wicoff, a defense attorney in Houston, who persuaded a Texas judge to have the DNA evidence reprocessed by a private testing facility. As Thompson had predicted, the results confirmed that Sutton was not a match. In the spring of 2003, more than four years after his arrest, Sutton was released from prison. His mother was waiting for him at the gates, her eyes bright with tears. “Going to prison, for me, was like seeing my death before it happens,” Sutton later told a local newspaper reporter.
In 2006, a cold hit in the FBI’s Combined DNA Index System, or CODIS, would lead police to Donnie Lamon Young, a convicted felon. Young confessed that in 1998, he and an accomplice had raped a Houston woman in her Ford Expedition. In January 2007, Young pleaded guilty to the crime.
Christy Kim was fired from the Houston crime lab, but reinstated after her lawyer argued that her errors—which ranged from how she had separated out the complex mixture to how she had reported the odds of a random match—were a product of systemic failures that included inadequate supervision. (Kim could not be reached for comment.) Sutton’s case became one of the central pillars of a public inquiry into practices at the lab. “The system failed at multiple points,” the head of the inquiry, Michael Bromwich, concluded.
Thompson was gratified by the overturning of Sutton’s conviction: The dangers he’d been warning about were obviously real. “For me, there was a shift of emphasis after Josiah,” Thompson told me. “It was no longer a question of whether errors are possible. It was a question of how many, and what exactly we’re going to do about it.” But as technological advances have made DNA evidence at once more trusted and farther-reaching, the answer has only become more elusive.
Modern forensic science is in the midst of a great reckoning. Since a series of high-profile legal challenges in the 1990s increased scrutiny of forensic evidence, a range of long-standing crime-lab methods have been deflated or outright debunked. Bite-mark analysis—a kind of dental fingerprinting that dates back to the Salem witch trials—is now widely considered unreliable; the “uniqueness and reproducibility” of ballistics testing has been called into question by the National Research Council. In 2004, the FBI was forced to issue an apology after it incorrectly connected an Oregon attorney named Brandon Mayfield to that spring’s train bombings in Madrid, on the basis of a “100 percent” match to partial fingerprints found on plastic bags containing detonator devices. Last year, the bureau admitted that it had reviewed testimony by its microscopic-hair-comparison analysts and found errors in at least 90 percent of the cases. A thorough investigation is now under way.
DNA typing has long been held up as the exception to the rule—an infallible technique rooted in unassailable science. Unlike most other forensic techniques, developed or commissioned by police departments, this one arose from an academic discipline, and has been studied and validated by researchers around the world. The method was pioneered by a British geneticist named Alec Jeffreys, who stumbled onto it in the autumn of 1984, in the course of his research on genetic sequencing, and soon put it to use in the field, helping police crack a pair of previously unsolved murders in the British Midlands. That case, and Jeffreys’s invention, made front-page news around the globe. “It was said that Dr. Alec Jeffreys had done a disservice to crime writers the world over, whose stories often center around doubtful identity and uncertain parentage,” the former detective Joseph Wambaugh wrote in The Blooding, his book on the Midlands murders.
A new era of forensics was being ushered in, one based not on intrinsically imperfect intuition or inherently subjective techniques that seemed like science, but on human genetics. Several private companies in the U.S. and the U.K., sensing a commercial opportunity, opened their own forensic-DNA labs. “Conclusive results in only one test!” read an advertisement for Cellmark Diagnostics, one of the first companies to market DNA-typing technology stateside. “That’s all it takes.”
As Jay Aronson, a professor at Carnegie Mellon University, notes in Genetic Witness, his history of what came to be known as the “DNA wars,” the technology’s introduction to the American legal system was by no means smooth. Defense attorneys protested that DNA typing did not pass the Frye Test, a legal standard that requires scientific evidence to have earned widespread acceptance in its field; many prominent academics complained that testing firms were not being adequately transparent about their techniques. And in 1995, during the murder trial of O. J. Simpson, members of his so-called Dream Team famously used the specter of DNA-sample contamination—at the point of collection, and in the crime lab—to invalidate evidence linking Simpson to the crimes.

Alec Jeffreys in 1987, a few years after developing the technique of DNA typing (Terry Smith / LIFE Images Collection / Getty)
But gradually, testing standards improved. Crime labs pledged a new degree of thoroughness and discipline, with added training for their employees. Analysts got better at guarding against contamination. Extraction techniques were refined. The FBI created its CODIS database for storing DNA profiles of convicted criminals and arrestees, along with an accreditation process for contributing laboratories, in an attempt to standardize how samples were collected and stored. “There was a sense,” Aronson told me recently, “that the issues raised in the DNA wars had been satisfactorily addressed. And a lot of people were ready to move on.”
Among them were Dream Team members Barry Scheck and Peter Neufeld, who had founded the Innocence Project in 1992. Now convinced that DNA analysis, provided the evidence was collected cleanly, could expose the racism and prejudice endemic to the criminal-justice system, the two attorneys set about applying it to dozens of questionable felony convictions. They have since won 178 exonerations using DNA testing; in the majority of the cases, the wrongfully convicted were black. “Defense lawyers sleep. Prosecutors lie. DNA testing is to justice what the telescope is for the stars … a way to see things as they really are,” Scheck and Neufeld wrote in a 2000 book, Actual Innocence, co-authored by the journalist Jim Dwyer.
While helping to overturn wrongful convictions, DNA was also becoming more integral to establishing guilt. The number of state and local crime labs started to multiply, as did the number of cases involving DNA evidence. In 2000, the year after Sutton was convicted, the FBI’s database contained fewer than 500,000 DNA profiles, and had aided in some 1,600 criminal investigations in its first two years of existence. The database has since grown to include more than 15 million profiles, which contributed to tens of thousands of investigations last year alone.
As recognition of DNA’s revelatory power seeped into popular culture, courtroom experts started talking about a “CSI effect,” whereby juries, schooled by television police procedurals, needed only to hear those three magic letters—DNA—to arrive at a guilty verdict. In 2008, Donald E. Shelton, a felony trial judge in Michigan, published a study in which 1,027 randomly summoned jurors in the city of Ann Arbor were polled on what they expected prosecutors to present during a criminal trial. Three-quarters of the jurors said they expected DNA evidence in rape cases, and nearly half said they expected it in murder or attempted-murder cases; 22 percent said they expected DNA evidence in every criminal case. Shelton quotes one district attorney as saying, “They expect us to have the most advanced technology possible, and they expect it to look like it does on television.”
Shelton found that jurors’ expectations had little effect on their willingness to convict, but other research has shown DNA to be a powerful propellant in the courtroom. A researcher in Australia recently found that sexual-assault cases involving DNA evidence there were twice as likely to reach trial and 33 times as likely to result in a guilty verdict; homicide cases were 14 times as likely to reach trial and 23 times as likely to end in a guilty verdict. As the Nuffield Council on Bioethics, in the United Kingdom, pointed out in a major study on forensic evidence, even the knowledge that the prosecution intends to introduce a DNA match could be enough to get a defendant to capitulate.
“You reached a point where the questions about collection and analysis and storage had largely stopped,” says Bicka Barlow, an attorney in San Francisco who has been handling cases involving DNA evidence for two decades. “DNA evidence was entrenched. And in a lot of situations, for a lot of lawyers, it was now too costly and time-intensive to fight.”
DNA analysis has risen above all other forensic techniques for good reason: “No [other] forensic method has been rigorously shown able to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source,” the National Research Council wrote in an influential 2009 report calling out inadequate methods and stating the need for stricter standards throughout the forensic sciences.
The problem, as a growing number of academics see it, is that science is only as reliable as the manner in which we use it—and in the case of DNA, the manner in which we use it is evolving rapidly. Consider the following hypothetical scenario: Detectives find a pool of blood on the floor of an apartment where a man has just been murdered. A technician, following proper anticontamination protocol, takes the blood to the local crime lab for processing. Blood-typing shows that the sample did not come from the victim; most likely, it belongs to the perpetrator. A day later, the detectives arrest a suspect. The suspect agrees to provide blood for testing. A pair of well-trained crime-lab analysts, double-checking each other’s work, establish a match between the two samples. The detectives can now place the suspect at the scene of the crime.
When Alec Jeffreys devised his DNA-typing technique, in the mid-1980s, this was as far as the science extended: side-by-side comparison tests. Sizable sample against sizable sample. The state of technology at the time mandated it—you couldn’t test the DNA unless you had plenty of biological material (blood, semen, mucus) to work with.
But today, most large labs have access to cutting-edge extraction kits capable of obtaining usable DNA from the smallest of samples, like so-called touch DNA (a smeared thumbprint on a window or a speck of spit invisible to the eye), and of identifying individual DNA profiles in complex mixtures, which include genetic material from multiple contributors, as was the case with the vaginal swab in the Sutton case.
These advances have greatly expanded the universe of forensic evidence. But they’ve also made the forensic analyst’s job more difficult. To understand how complex mixtures are analyzed—and how easily those analyses can go wrong—it may be helpful to recall a little bit of high-school biology: We share 99.9 percent of our genes with every other human on the planet. However, in specific locations along each strand of our DNA, the genetic code repeats itself in ways that vary from one individual to the next. Each of those variations, or alleles, is shared with a relatively small portion of the global population. The best way to determine whether a drop of blood belongs to a serial killer or to the president of the United States is to compare alleles at as many locations as possible.
Think of it this way: There are many thousands of paintings with blue backgrounds, but fewer with blue backgrounds and yellow flowers, and fewer still with blue backgrounds, yellow flowers, and a mounted knight in the foreground. When a forensic analyst compares alleles at 13 locations—the standard for most labs—the odds of two unrelated people matching at all of them are less than one in 1 billion.
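A rough sketch of the arithmetic, using invented allele frequencies rather than real population data, shows why stacking up independent loci drives the random-match probability so low:

    # Illustrative only: made-up per-locus genotype frequencies, not real population data.
    import math

    per_locus_frequency = [0.10, 0.08, 0.12, 0.07, 0.09, 0.11, 0.06,
                           0.10, 0.08, 0.13, 0.07, 0.09, 0.10]   # one value per locus, 13 loci

    # Assuming the loci are independent, the chance that an unrelated person
    # matches at every one of them is the product of the per-locus frequencies.
    random_match_probability = math.prod(per_locus_frequency)
    print(f"about 1 in {1 / random_match_probability:,.0f}")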
With mixtures, the math gets a lot more complicated: The number of alleles in a sample doubles in the case of two contributors, and triples in the case of three. Now, rather than a painting, the DNA profile is like a stack of transparency films. The analyst must determine how many contributors are involved, and which alleles belong to whom. If the sample is very small or degraded—the two often go hand in hand—alleles might drop out in some locations, or appear to exist where they do not. Suddenly, we are dealing not so much with an objective science as an interpretive art.
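A toy example, under a deliberately simplified model, hints at how quickly the interpretation problem grows. Suppose a single locus in a two-person mixture shows four alleles. Requiring every observed allele to be explained leaves only a handful of genotype combinations; allow just one allele to have dropped out and the list of candidate explanations balloons:

    # Illustrative only: a simplified one-locus model of a two-contributor mixture.
    from itertools import combinations_with_replacement, product

    observed = {"A", "B", "C", "D"}                    # alleles seen at this locus
    genotypes = list(combinations_with_replacement(sorted(observed), 2))

    def explanations(required):
        # ordered pairs of genotypes that jointly cover `required`
        # without introducing alleles outside the observed set
        return [(g1, g2) for g1, g2 in product(genotypes, repeat=2)
                if required <= set(g1) | set(g2) <= observed]

    no_dropout = explanations(observed)
    with_dropout = {pair for allele in observed
                    for pair in explanations(observed - {allele})}

    print(len(no_dropout), len(with_dropout))          # 6 versus 54 candidate pairings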
A groundbreaking study by Itiel Dror, a cognitive neuroscientist at University College London, and Greg Hampikian, a biology and criminal-justice professor at Boise State University, illustrates exactly how subjective the reading of complex mixtures can be. In 2010, Dror and Hampikian obtained paperwork from a 2002 Georgia rape trial that hinged on DNA typing: The main evidence implicating the defendant was the accusation of a co-defendant who was testifying in exchange for a reduced sentence. Two forensic scientists had concluded that the defendant could not be excluded as a contributor to the mixture of sperm from inside the victim, meaning his DNA was a possible match; the defendant was found guilty.
Dror and Hampikian gave the DNA evidence to 17 lab technicians for examination, withholding context about the case to ensure unbiased results. All of the techs were experienced, with an average of nine years in the field. Dror and Hampikian asked them to determine whether the mixture included DNA from the defendant.

In 2011, the results of the experiment were made public: Only one of the 17 lab technicians concurred that the defendant could not be excluded as a contributor. Twelve told Dror and Hampikian that the DNA was exclusionary, and four said that it was inconclusive. In other words, had any one of those 16 scientists been responsible for the original DNA analysis, the rape trial could have played out in a radically different way. Toward the end of the study, Dror and Hampikian quote the early DNA-testing pioneer Peter Gill, who once noted, “If you show 10 colleagues a mixture, you will probably end up with 10 different answers” as to the identity of the contributor. (The study findings are now at the center of the defendant’s motion for a new trial.)
“Ironically, you have a technology that was meant to help eliminate subjectivity in forensics,” Erin Murphy, a law professor at NYU, told me recently. “But when you start to drill down deeper into the way crime laboratories operate today, you see that the subjectivity is still there: Standards vary, training levels vary, quality varies.”
Last year, Murphy published a book called Inside the Cell: The Dark Side of Forensic DNA, which recounts dozens of cases of DNA typing gone terribly wrong. Some veer close to farce, such as the 15-year hunt for the Phantom of Heilbronn, whose DNA had been found at more than 40 crime scenes in Europe in the 1990s and early 2000s. The DNA in question turned out to belong not to a serial killer, but to an Austrian factory worker who made testing swabs used by police throughout the region. And some are tragic, like the tale of Dwayne Jackson, an African American teenager who pleaded guilty to robbery in 2003 after being presented with damning DNA evidence, and was exonerated years later, in 2011, after a police department in Nevada admitted that its lab had accidentally swapped Jackson’s DNA with the real culprit’s.
Most troubling, Murphy details how quickly even a trace of DNA can now become the foundation of a case. In 2012, police in California arrested Lukis Anderson, a homeless man with a rap sheet of nonviolent crimes, on charges of murdering the millionaire Raveesh Kumra at his mansion in the foothills outside San Jose. The case against Anderson started when police matched biological matter found under Kumra’s fingernails to Anderson’s DNA in a database. Anderson was held in jail for five months before his lawyer was able to produce records showing that Anderson had been in detox at a local hospital at the time of the killing; it turned out that the same paramedics who responded to the distress call from Kumra’s mansion had treated Anderson earlier that night, and inadvertently transferred his DNA to the crime scene via an oxygen-monitoring device placed on Kumra’s hand.
To Murphy, Anderson’s case demonstrates a formidable problem. Contamination is an obvious hazard when it comes to DNA analysis. But at least contamination can be prevented with care and proper technique. DNA transfer—the migration of cells from person to person, and between people and objects—is inevitable when we touch, speak, do the laundry. A 1996 study showed that sperm cells from a single stain on one item of clothing made their way onto every other item of clothing in the washer. And because we all shed different amounts of cells, the strongest DNA profile on an object doesn’t always correspond to the person who most recently touched it. I could pick up a knife at 10 in the morning, but an analyst testing the handle that day might find a stronger and more complete DNA profile from my wife, who was using it four nights earlier. Or the analyst might find a profile of someone who never touched the knife at all. One recent study asked participants to shake hands with a partner for two minutes and then hold a knife; when the DNA on the knives was analyzed, the partner was identified as a contributor in 85 percent of cases, and in 20 percent as the main or sole contributor.
Given rates of transfer, the mere presence of DNA at a crime scene shouldn’t be enough for a prosecutor to obtain a conviction. Context is needed. What worries experts like Murphy is that advancements in DNA testing are enabling ever more emphasis on ever less substantial evidence. A new technique known as low-copy-number analysis can derive a full DNA profile from as little as 10 trillionths of a gram of genetic material, by copying DNA fragments into a sample large enough for testing. The technique not only carries a higher risk of sample contamination and allele dropout, but could also implicate someone who never came close to the crime scene. Given the growing reliance on the CODIS database—which allows police to use DNA samples to search for possible suspects, rather than just to verify the involvement of existing suspects—the need to consider exculpatory evidence is greater than ever.
But Bicka Barlow, the San Francisco attorney, argues that the justice system now allows little room for caution. Techs at many state-funded crime labs have cops and prosecutors breathing down their necks for results—cops and prosecutors who may work in the same building. The threat of bias is everywhere. “An analyst might be told, ‘Okay, we have a suspect. Here’s the DNA. Look at the vaginal swab, and compare it to the suspect,’ ” Barlow says. “And they do, but they’re also being told all sorts of totally irrelevant things: The victim was 6 years old, the victim was traumatized, it was a hideous crime.”
Indeed, some analysts are incentivized to produce inculpatory forensic evidence: A recent study in the journal Criminal Justice Ethics notes that in North Carolina, state and local law-enforcement agencies operating crime labs are compensated $600 for DNA analysis that results in a conviction.
“I don’t think it’s unreasonable to point out that DNA evidence is being used in a system that’s had horrible problems with evidentiary reliability,” Murphy, who worked for several years as a public defender, told me. No dependable estimates exist for how many people have been falsely accused or imprisoned on the basis of faulty DNA evidence. But in Inside the Cell, she hints at the stakes: “The same broken criminal-justice system that created mass incarceration,” she writes, “and that has processed millions through its machinery without catching even egregious instances of wrongful conviction, now has a new and powerful weapon in its arsenal.”
The growing potential for mistakes in DNA testing has inspired a solution fitting for the digital age: automation, or the “complete removal of the human being from doing any subjective decision making,” as Mark Perlin, the CEO of the DNA-testing firm Cybergenetics, put it to me recently.
Perlin grew interested in DNA-typing techniques in the 1990s, while working as a researcher on genome technology at Carnegie Mellon, and spent some time reviewing recent papers on forensic usage. He was “really disappointed” by what he found, he told me: Faced with complex DNA mixtures, analysts too frequently arrived at flawed conclusions. An experienced coder, he set about designing software that could take some of the guesswork out of DNA profiling. It could also process results much faster. In 1996, Perlin waved goodbye to his post at Carnegie Mellon, and together with his wife, Ria David, and a small cadre of employees, focused on developing a program they dubbed TrueAllele.
At the core of TrueAllele is an algorithm: Data from DNA test strips are uploaded to a computer and run through an array of probability models until the software spits out a likelihood ratio—the probability, weighed against coincidence, that sample X is a match with sample Y. The idea, Perlin told me when I visited Cybergenetics headquarters, in Pittsburgh, was to correctly differentiate individual DNA profiles found at the scene of a crime. He gave me an example: A lab submits data from a complex DNA mixture found on a knife used in a homicide. The TrueAllele system might conclude that a match between the knife and a suspect is “5 trillion times more probable than coincidence,” and thus that the suspect almost certainly touched the knife. No more analysts squinting at their equipment, trying to correspond alleles with contributors. “Our program,” Perlin told me proudly, “is able to do all that for you, more accurately.”
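TrueAllele’s internal models are proprietary, but the likelihood-ratio arithmetic the article describes can be sketched in a toy form. The per-locus probabilities below are invented for illustration; they stand in for whatever the probability models actually estimate.

    # Toy sketch only: invented numbers, not Cybergenetics' proprietary models.
    import math

    # Probability of the observed DNA data at each locus under two hypotheses:
    p_if_suspect_contributed = [0.80, 0.75, 0.90, 0.85, 0.70]
    p_if_unrelated_person    = [0.02, 0.05, 0.01, 0.03, 0.04]

    # The likelihood ratio weighs the evidence under the two hypotheses.
    likelihood_ratio = (math.prod(p_if_suspect_contributed)
                        / math.prod(p_if_unrelated_person))
    print(f"evidence is ~{likelihood_ratio:,.0f} times more probable "
          f"if the suspect is a contributor")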
Around us, half a dozen analysts and coders sat hunched over computer screens. The office was windowless and devoid of any kind of decoration, save for a whiteboard laced with equations—the vibe was more bootstrapped start-up than CSI. “I think visitors are surprised not to see bubbling vials and lab equipment,” Perlin acknowledged. “But that’s not us.”
He led me down the hallway and into a storage room. Row upon row of Cybergenetics-branded Apple desktop computers lined the shelves: ready-made TrueAllele kits. Perlin could not tell me exactly how many software units he sells each year, but he allowed that TrueAllele had been purchased by crime labs in Oman, Australia, and 11 U.S. states; last year, Cybergenetics hired its first full-time salesman.
Four years ago, in one of its more high-profile tests to date, the software was used to connect an extremely small trace of DNA at a murder scene in Schenectady, New York, to the killer, an acquaintance of the victim. A similarly reliable match, Perlin told me, would have been very difficult to obtain by more analog means.
And the software’s potential is only starting to be mined, he added. TrueAllele’s ability to pull matches from microscopic or muddled traces of DNA is helping crack cold cases, by reprocessing evidence once dismissed as inconclusive. “You hear the word inconclusive, you naturally think, Okay. It’s done,” Perlin told me, his eyes widening. “But it’s not! It just means [the lab technicians] can’t interpret it. Let me ask you: What’s the societal impact of half a crime lab’s evidence being called inconclusive and prosecutors and police and defenders mistakenly believing that this means it’s uninformative data?”
His critics have a darker view. William Thompson points out that Perlin has declined to make public the algorithm that drives the program. “You do have a black-box situation happening here,” Thompson told me. “The data go in, and out comes the solution, and we’re not fully informed of what happened in between.”
Last year, at a murder trial in Pennsylvania where TrueAllele evidence had been introduced, defense attorneys demanded that Perlin turn over the source code for his software, noting that “without it, [the defendant] will be unable to determine if TrueAllele does what Dr. Perlin claims it does.” The judge denied the request.
But TrueAllele is just one of a number of “probabilistic genotyping” programs developed in recent years—and as the technology has become more prominent, so too have concerns that it could be replicating the problems it aims to solve. The Legal Aid Society of New York recently challenged a comparable software program, the Forensic Statistical Tool, which was developed in-house by the city’s Office of the Chief Medical Examiner. The FST had been used to test evidence in hundreds of cases in the state, including an attempted-murder charge against a client of Jessica Goldthwaite, a Legal Aid attorney.
Goldthwaite knew little about DNA typing, but one of her colleagues at the time, Susan Friedman, had earned a master’s degree in biomedical science; another, Clinton Hughes, had been involved in several DNA cases. The three attorneys decided to educate themselves about the technology, and questioned half a dozen scientists. The responses were emphatic: “One population geneticist we consulted said what the [medical examiner] had made public about the FST read more like an ad than a scientific paper,” Hughes told me. Another called it a “random number generator.”
In 2011, Legal Aid requested a hearing to question whether the software met the Frye standard of acceptance by the larger scientific community. To Goldthwaite and her team, it seemed at least plausible that a relatively untested tool, especially in analyzing very small and degraded samples (the FST, like TrueAllele, is sometimes used to analyze low-copy-number evidence), could be turning up allele matches where there were none, or missing others that might have led technicians to an entirely different conclusion. And because the source code was kept secret, jurors couldn’t know the actual likelihood of a false match.
At the hearing, bolstered by a range of expert testimony, Goldthwaite and her colleagues argued that the FST, far from being established science, was an unknown quantity. (The medical examiner’s office refused to provide Legal Aid with the details of its code; in the end, the team was compelled to reverse-engineer the algorithm to show its flaws.)
Judge Mark Dwyer agreed. “Judges are, far and away, not the people best qualified to explain science,” he began his decision. Still, he added, efforts to legitimize the methods “must continue, if they are to persuade.” The FST evidence was ruled inadmissible.
Dwyer’s ruling did not have the weight of precedent: Other courts are free to accept evidence analyzed by probabilistic software—more and more of which is likely to enter the courtroom in the coming years—as they see fit. Still, Goldthwaite told me, the fact that one judge had been willing to question the new science suggested that others might too, and she and her team continue to file legal challenges.
When I interviewed Perlin at Cybergenetics headquarters, I raised the matter of transparency. He was visibly annoyed. He noted that he’d published detailed papers on the theory behind TrueAllele, and filed patent applications, too: “We have disclosed not the trade secrets of the source code or the engineering details, but the basic math.”
To Perlin, much of the criticism is a case of sour grapes. “In any new development in forensic science, there’s been incredible resistance to the idea that you’re going to rely on a validated machine to give you an accurate answer instead of relying on yourself and your expertise,” he told me.
In 2012, shortly after Legal Aid filed its challenge to the FST, two developers in the Netherlands, Hinda Haned and Jeroen de Jong, released LRmix Studio, free and open-source DNA-profiling software—the code is publicly available for other users to explore and improve.
Erin Murphy, of NYU, has argued that if probabilistic DNA typing is to be widely accepted by the legal community—and she believes that one day it should be—it will need to move in this direction: toward transparency.
“The problem with all DNA profiling is that there isn’t skepticism,” she told me. “There isn’t the necessary pressure. Is there increasing recognition of the shortcomings of old-school technology? Absolutely. Is there trepidation about the newer technology? Yes. But just because we’re moving forward doesn’t mean mistakes aren’t still being made.”

Mark Perlin, the CEO of Cybergenetics, at the company’s headquarters in Pittsburgh, April 18, 2016 (Jeff Swensen)
On April 3, 2014, the City of Houston shut down its old crime lab and transferred all DNA-testing operations to a new entity known as the Houston Forensic Science Center. Unlike its predecessor, which was overseen by the police department, the Forensic Science Center is intended to be an autonomous organization, with a firewall between it and other branches of law enforcement. “I think it’s important for the forensic side to have that independence, so we can narrow it down without worrying about which side is going to benefit or profit from it, just narrowing it down to what we think is the accurate information,” Daniel Garner, the center’s head, told a local reporter.
And yet Houston has been hard-pressed to leave its troubled history with forensic DNA behind. In June 2014, the Houston Chronicle reported that a former analyst at the old crime lab, Peter Lentz, had resigned after a Houston Police Department internal investigation found evidence of misconduct, including improper procedure, lying, and tampering with an official record. A representative from the county district attorney’s office told the Chronicle that her office was looking into all of the nearly 200 cases—including 51 murder cases—that Lentz had worked on during his time at the lab. (A grand jury declined to indict Lentz for any wrongdoing; he could not be reached for comment.)
“It’s almost 20 years later, and we’re still dealing with the repercussions,” Josiah Sutton’s mother, Carol Batie, told me earlier this year. “They say things are getting better, and maybe they are, but I always respond that it wasn’t fast enough to save Josiah.”
Before entering prison, Batie said, Sutton had been a promising football player, with a college career ahead of him. After his exoneration, he seemed stuck in a state of suspended animation. He was angry and resentful of authority. He drifted from job to job. He received an initial lump-sum payment from the city, as compensation for his wrongful conviction and the time he spent in prison, along with a much smaller monthly payout. But the lump-sum payment quickly vanished. He fathered five kids with five different women.
Batie called the city to ask about counseling for her son, but was told no such service was available. “I did my best to put myself in his shoes,” Batie said. “I was annoyed, but I knew he felt like the world was against him. Everyone had always given up on him. I couldn’t give up on him too.”
Last summer, Sutton was arrested for allegedly assaulting an acquaintance of his then-girlfriend. He spent the better part of a year in lockup before posting bail and is now awaiting trial. (Sutton denies the charges.) Batie believes that her son’s problems are a direct result of his incarceration in 1999. “He had his childhood stolen from him,” she told me. “No prom, no dating, no high-school graduation. Nothing. And he never recovered.”
I wondered whether Batie blamed DNA. She laughed. “Oh, no, honey,” she said. “DNA is science. You can’t blame DNA. You can only blame the people who used it wrong.”