{"id":6010,"date":"2019-02-04T14:31:40","date_gmt":"2019-02-04T18:31:40","guid":{"rendered":"http:\/\/ncjolt.org\/?p=6010"},"modified":"2020-06-04T20:52:28","modified_gmt":"2020-06-04T20:52:28","slug":"artificial-intelligence-could-lead-to-a-more-equitable-judiciary","status":"publish","type":"post","link":"https:\/\/journals.law.unc.edu\/ncjolt\/blogs\/artificial-intelligence-could-lead-to-a-more-equitable-judiciary\/","title":{"rendered":"Artificial Intelligence Could Lead to a More Equitable Judiciary"},"content":{"rendered":"\n<p><a href=\"https:\/\/users.nber.org\/~dlchen\/cv.pdf\">Daniel Chen<\/a>, a professor and researcher at the\nToulouse School of Economics and a member of the University of Toulouse Faculty\nof Law in Toulouse, France, recently suggested in a <a href=\"https:\/\/poseidon01.ssrn.com\/delivery.php?ID=782003117027014005125120094021121089052016006059021006029098001127094075011111017092034010029016033047045124064025030069005097051022030047038031079028020091127006020069011090102015101124123098085107078114091102092011023101119028108099074019100125105&amp;EXT=pdf\">working paper<\/a>\nthat artificial intelligence has the potential to show judges how their\nbehavioral biases inform their judicial decision-making.<\/p>\n\n\n<p>He begins\nthe paper by highlighting the most statistically apparent flaws of the American\njudiciary\u2019s human nature, such as that judges at the appellate level <a href=\"https:\/\/poseidon01.ssrn.com\/delivery.php?ID=526026122121112004073064019067009004021019084010061003029098000119084101094126096098117019034042056022028000090066065004016122105078049036082098021110006080019075085012056071108084085091016093116125098010117012066071066096123122118121068020027120020&amp;EXT=pdf\">become more politicized before and\nelection<\/a> and <a href=\"https:\/\/poseidon01.ssrn.com\/delivery.php?ID=446005007069125116122097020093113093121054088068002056077030108087122116088065081110030012116061009062052103003119091004100091039039001011046067112001123075088094019053037089092001101001086074127101091088084095109097116122065026011082001071023004078&amp;EXT=pdf\">dissent less during wartime<\/a>, or that judges are <a href=\"https:\/\/www.nber.org\/papers\/w22026.pdf\">2 percentage points more likely<\/a>\nto deny asylum to refugees if their previous decision granted asylum.<\/p>\n\n\n<p>As a solution to the problem of implicit biases in human decision-making, Chen argues that statistical analysis of judicial decisions and machine learning based on datasets produced from those statistics can be used to predict future decisions, informing judges of their biases before they rule based on them.<\/p>\n\n\n<blockquote class=\"wp-block-quote\"><p>. . . as a judge naturally strays from the legally optimal outcome, the influence of extralegal factors in his or her decision making grows. 
<\/p><\/blockquote>\n\n\n<p>Statistically,\nhe argues, judges are more likely to be influenced by their biases when they\nare more indifferent to the outcome of a given case.&nbsp; There is substantial literature discussing\nthe extralegal factors that weigh into how judges decide, but aren\u2019t particular\nto a given judge, such as environmental factors, like the weather or whether\ntheir favorite sports team .&nbsp; Chen argues\nthat as a judge naturally strays from the legally optimal outcome, the\ninfluence of extralegal factors in his or her decision making grows.<\/p>\n\n\n<p>His\nsolution involves training artificial intelligence to detect instances where\njudicial indifference is present, particularly when judges \u201cappear to ignore\nthe circumstances of the case when making decisions.\u201d&nbsp; When AI picks up on judicial indifference or\nthe use of other non-legal sources in decision-making, he argues, interventions\nshould be staged before the judge rules.&nbsp;\nHe writes, \u201cInforming judges about the predictions made by a model decision\nmaker could help reduce judge-level variation and arbitrariness. Potential\nbiases that have been identified in prior decisions or writing could be brought\nto a judge\u2019s attention, where they could be subjected to higher order cognitive\nscrutiny. Such efforts would build on the already significant push to integrate\nrisk-assessment into the criminal justice process to help inform judges of the\nobjective risks posed by defendants.\u201d<\/p>\n\n\n<p>He also suggests that artificial intelligence can be used to develop general judicial training methods in addition to the targeted ones discussed above.&nbsp; <a href=\"https:\/\/www.nber.org\/papers\/w19765.pdf\">Citing a study<\/a> that found that racial bias decreased among NBA referees who were generally aware of racial bias in their in-game decision-making.&nbsp; Applying this to judges, he writes that the \u201cgoal would be to educate legal decision makers in the tools of data analysis, so that they can become better consumers of this information when it is present during legal proceedings, and to more generally provide a set of thinking tools for understanding inference, prediction, and the conscious and unconscious factors that may influence their decision making.\u201d In an interview with <a href=\"https:\/\/www.theverge.com\/2019\/1\/17\/18186674\/daniel-chen-machine-learning-rule-of-law-economics-psychology-judicial-system-policy\">The Verge,<\/a> Chen responds to some of the criticism he\u2019s received about allowing AI to influence judicial decision-making: \u201cThere\u2019s certainly a lot of interest in how algorithms can improve decision-making. I\u2019ve also been thinking about how and why people are so resistant to this idea of predictions and machines assisting in judgment. I think it\u2019s a little related to the fact that people like to think we\u2019re unique and so being compared to someone else in this way isn\u2019t quite recognizing my individuality and dignity. On the one hand, people might just get used to big data helping judges make decisions. 
Ryan Bullard, 21 January 2019