Artificial Intelligence Could Lead to a More Equitable Judiciary
Daniel Chen, a professor and researcher at the Toulouse School of Economics and a member of the University of Toulouse Faculty of Law in Toulouse, France, recently suggested in a working paper that artificial intelligence has the potential to show judges how their behavioral biases inform their judicial decision-making.
He begins the paper by highlighting the most statistically apparent ways in which human nature shapes the American judiciary, such as findings that appellate judges become more politicized before an election and dissent less during wartime, or that judges are 2 percentage points more likely to deny asylum to a refugee if their previous decision granted asylum.
As a solution to the problem of implicit bias in human decision-making, Chen argues that statistical analysis of past judicial decisions, combined with machine learning models trained on the resulting datasets, can be used to predict future decisions and to inform judges of their biases before they rule on the basis of them.
Statistically, he argues, judges are more likely to be influenced by their biases when they are more indifferent to the outcome of a given case. There is a substantial literature on extralegal factors that weigh into how judges decide but are not particular to a given judge, such as environmental factors like the weather or whether a judge’s favorite sports team recently won or lost. Chen argues that as a judge naturally strays from the legally optimal outcome, the influence of extralegal factors in his or her decision-making grows.
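Chen’s paper is an empirical and statistical argument rather than a piece of software, but a rough sketch can make the underlying idea concrete: fit a model on legally relevant case features alone, then again with extralegal features added, and treat any gain in predictive accuracy as a warning sign that something other than the law is shaping outcomes. Everything below, from the feature names to the simulated data, is a hypothetical illustration, not Chen’s actual method or code.

```python
# Hypothetical sketch: does adding extralegal features improve prediction of rulings?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical legally relevant features: case merit and strength of evidence.
merit = rng.normal(size=n)
evidence = rng.normal(size=n)

# Hypothetical extralegal features: whether the judge granted relief in the
# previous case, and a crude indicator for bad weather on the hearing day.
prev_grant = rng.integers(0, 2, size=n)
bad_weather = rng.integers(0, 2, size=n)

# Simulated rulings: mostly driven by the legal features, with a small
# extralegal tilt (slightly less likely to grant right after a prior grant).
logit = 1.5 * merit + 1.0 * evidence - 0.4 * prev_grant - 0.2 * bad_weather
granted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_legal = np.column_stack([merit, evidence])
X_full = np.column_stack([merit, evidence, prev_grant, bad_weather])

acc_legal = cross_val_score(LogisticRegression(), X_legal, granted, cv=5).mean()
acc_full = cross_val_score(LogisticRegression(), X_full, granted, cv=5).mean()

# If adding extralegal features meaningfully improves prediction, factors
# outside the law are helping to explain how these cases come out.
print(f"accuracy, legal features only: {acc_legal:.3f}")
print(f"accuracy, legal + extralegal:  {acc_full:.3f}")
```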
His solution involves training artificial intelligence to detect instances where judicial indifference is present, particularly when judges “appear to ignore the circumstances of the case when making decisions.” When AI picks up on judicial indifference or the use of other non-legal sources in decision-making, he argues, interventions should be staged before the judge rules. He writes, “Informing judges about the predictions made by a model decision maker could help reduce judge-level variation and arbitrariness. Potential biases that have been identified in prior decisions or writing could be brought to a judge’s attention, where they could be subjected to higher order cognitive scrutiny. Such efforts would build on the already significant push to integrate risk-assessment into the criminal justice process to help inform judges of the objective risks posed by defendants.”
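One way to picture the kind of pre-ruling intervention Chen describes, again as a purely hypothetical sketch rather than anything taken from the paper, is a check that compares the model’s prediction for a pending case against the same prediction with the extralegal features neutralized, and raises an alert when the gap is large. The feature layout, threshold, and flagging function below are all illustrative assumptions.

```python
# Hypothetical sketch of a pre-ruling alert based on extralegal influence.
import numpy as np
from sklearn.linear_model import LogisticRegression

def flag_extralegal_influence(model, case, extralegal_idx, threshold=0.10):
    """Flag a pending case if zeroing out its extralegal features moves the
    model's predicted probability of granting relief by more than `threshold`."""
    neutral = case.copy()
    neutral[extralegal_idx] = 0.0  # e.g., treat it as "no prior grant, fair weather"
    p_observed = model.predict_proba(case.reshape(1, -1))[0, 1]
    p_neutral = model.predict_proba(neutral.reshape(1, -1))[0, 1]
    return abs(p_observed - p_neutral) > threshold

# Minimal demonstration on synthetic data: two legal features followed by two
# extralegal ones (prior grant, bad weather), mirroring the earlier sketch.
rng = np.random.default_rng(1)
legal = rng.normal(size=(2_000, 2))
extralegal = rng.integers(0, 2, size=(2_000, 2))
X = np.hstack([legal, extralegal])
y = (rng.random(2_000) < 1 / (1 + np.exp(-(1.5 * X[:, 0] + X[:, 1] - 0.8 * X[:, 2])))).astype(int)

model = LogisticRegression().fit(X, y)
pending_case = np.array([0.1, -0.2, 1.0, 1.0])  # weak case, prior grant, bad weather
if flag_extralegal_influence(model, pending_case, extralegal_idx=[2, 3]):
    print("Alert: extralegal features are shifting the predicted outcome.")
```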
He also suggests that artificial intelligence can be used to develop general judicial training methods in addition to the targeted interventions discussed above. He cites a study that found that racial bias decreased among NBA referees once they were made generally aware of racial bias in their in-game decision-making. Applying this to judges, he writes that the “goal would be to educate legal decision makers in the tools of data analysis, so that they can become better consumers of this information when it is present during legal proceedings, and to more generally provide a set of thinking tools for understanding inference, prediction, and the conscious and unconscious factors that may influence their decision making.”

In an interview with The Verge, Chen responded to some of the criticism he has received about allowing AI to influence judicial decision-making: “There’s certainly a lot of interest in how algorithms can improve decision-making. I’ve also been thinking about how and why people are so resistant to this idea of predictions and machines assisting in judgment. I think it’s a little related to the fact that people like to think we’re unique and so being compared to someone else in this way isn’t quite recognizing my individuality and dignity. On the one hand, people might just get used to big data helping judges make decisions. On the other, I’m an individual, so don’t treat me like yet another data point.”
Ryan Bullard, 21 January 2019