Robots Are Doing Pretrial Risk Assessments, and They're Not Great at It

By Christopher Coble, Esq. on July 24, 2019

In the American legal system, you are presumed innocent until proven guilty. But that doesn't mean you can't be incarcerated before being proven guilty. Whether it's because courts have deemed them a flight or safety risk, or because they can't afford bail, almost half a million people are jailed before their criminal trials ever take place.

This kind of imprisonment, and its attendant costs (loss of income, a job, and even a place to live), runs counter to our guiding principles of criminal justice, and reformers have been proposing fixes, like eliminating money bail and only detaining people based on their risk of harming others or fleeing before trial. This sounds great, especially when we can harness the latest technology to make those risk assessments instead of fallible judges or magistrates. But it turns out the machines aren't any better at this process than humans, so where does that leave us?

Fundamentally Flawed

"As researchers in the fields of sociology, data science and law, we believe pretrial risk assessment tools are fundamentally flawed," declares a New York Times op-ed. "They give judges recommendations that make future violence seem more predictable and more certain than it actually is. In the process, risk assessments may perpetuate the misconceptions and fears that drive mass incarceration."

The article, written by Chelsea Barabas, Karthik Dinakar, and Colin Doyle, notes the explosion of the U.S.'s pretrial detention population ("There are more legally innocent people behind bars in America today than there were convicted people in jails and prisons in 1980") and the attractiveness of algorithmic risk assessments to either reduce the number of people detained while awaiting trial or to make more accurate predictions about a defendant's likelihood to commit a violent crime while released. "Algorithmic risk assessments are touted as being more objective and accurate than judges in predicting future violence," they note. "Across the political spectrum, these tools have become the darling of bail reform. But their success rests on the hope that risk assessments can be a valuable course corrector for judges' faulty human intuition."

Instead, risk assessment AI massively overstates the risk pretrial release poses to the public:

Risk assessments are virtually useless for identifying who will commit violence if released pretrial. Consider the pre-eminent risk assessment tool on the market today, the Public Safety Assessment, or P.S.A., adopted in New Jersey, Kentucky and various counties across the country. In these jurisdictions, the P.S.A. assesses every person accused of a crime and flags them as either at risk for "new violent criminal activity" or not. A judge sees whether the person has been flagged for violence and, depending on the jurisdiction, may receive an automatic recommendation to release or detain.
Risk assessments' simple labels obscure the deep uncertainty of their actual predictions. Largely because pretrial violence is so rare, it is virtually impossible for any statistical model to identify people who are more likely than not to commit a violent crime.

Detention and Due Process

MIT's Technology Review similarly found that risk assessment tools based on historical crime data risk calcifying the same prejudices we hoped to avoid by using tech in the first place, and "turn correlative insights into causal scoring mechanisms." And the Partnership on AI spotted the same issues. "Using risk assessment tools to make fair decisions about human liberty would require solving deep ethical, technical, and statistical challenges, including ensuring that the tools are designed and built to mitigate bias at both the model and data layers, and that proper protocols are in place to promote transparency and accountability," it noted in its Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System. "The tools currently available and under consideration for widespread use suffer from several of these failures."

Additionally, Judge Noel L. Hillman of the United States District Court for the District of New Jersey warned that the use of AI at sentencing may "violate basic tenets of due process," and that reliance on such risk assessment tools "improperly cedes the discretionary sentencing power to nonjudicial entities." "In short," Judge Hillman noted, "the use of AI at sentencing is potentially unfair, unwise, and an imprudent abdication of the judicial function."

Still, risk assessment tech is in use across the country, and the Wisconsin Supreme Court (one of the only to address the issue) determined that "if used properly, observing the limitations and cautions set forth herein, a circuit court's consideration of a ... risk assessment at sentencing does not violate a defendant's right to due process."

It's possible that if you've been charged with a crime, a judge could consider a risk assessment tool when deciding whether you can be released before your trial. And, in some jurisdictions, a judge must take such an assessment into account. So, if you've been charged with a crime, contact an experienced criminal defense attorney for help.
