{"id":5450,"date":"2018-02-02T17:15:21","date_gmt":"2018-02-02T21:15:21","guid":{"rendered":"http:\/\/ncjolt.org\/?p=5450"},"modified":"2020-06-04T20:52:34","modified_gmt":"2020-06-04T20:52:34","slug":"artificial-intelligences-emerging-threat-human-rights","status":"publish","type":"post","link":"https:\/\/journals.law.unc.edu\/ncjolt\/blogs\/artificial-intelligences-emerging-threat-human-rights\/","title":{"rendered":"Artificial Intelligence\u2019s Emerging Threat to Human Rights"},"content":{"rendered":"<p>In the wake of consequential 2016 election, during which artificial intelligence was used to potentially <a href=\"https:\/\/www.nytimes.com\/2017\/09\/07\/us\/politics\/russia-facebook-twitter-election.html\">influence voters<\/a>, deeper questions about AI present themselves\u2014one of which is: can AI threaten human rights? The answer is most certainly a resounding yes, because it <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">already has<\/a>.<br \/>\nTo be clear, this is not a suggestion that robots are, on their own, making racist or sexist decisions; the fault still lies <a href=\"https:\/\/newrepublic.com\/article\/144644\/turns-algorithms-racist\">entirely<\/a> with humans. In layman\u2019s terms, AI technologies (which includes everything from computers and software to <a href=\"http:\/\/sophiabot.com\/about-me\/\">Sophia the Robot<\/a>) still come down to basic <a href=\"https:\/\/newrepublic.com\/article\/144644\/turns-algorithms-racist\">input-output systems<\/a>, in which the input is a large amount of pre-existing information. 
Whatever bias exists in the input data will, by default, translate to the output.<\/p>\n<blockquote><p>Thus, our software is literally programmed to \u201creplicate the injustices of the past,\u201d unless something is done to change that effect.<\/p><\/blockquote>\n<p>In one case, image recognition software produced <a href=\"https:\/\/www.wired.com\/story\/machines-taught-by-photos-learn-a-sexist-view-of-women\/\">sexist results<\/a>. The software was trained on research image collections and began associating actions such as cooking and cleaning with women, while associating actions like shooting and playing sports with men. When the researchers investigated further, they discovered that the software had found a <a href=\"https:\/\/www.wired.com\/story\/machines-taught-by-photos-learn-a-sexist-view-of-women\/\">pattern of bias<\/a> that already existed in the image collections, and that the bias was amplified as the software trained itself on the photos.<br \/>\nWhile software used only by researchers may not seem to pose a human rights issue, consider an <a href=\"https:\/\/www.wired.com\/2016\/10\/clarifai-wants-correct-ais-biggest-gaffes\/\">incident<\/a> Google had with similar image recognition software: tagging pictures of black people as gorillas. Or in another case, a program meant to help police departments find crime hotspots was shown to be <a href=\"https:\/\/www.theguardian.com\/inequality\/2017\/aug\/08\/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses\">vulnerable to over-policing<\/a> predominantly black and brown neighborhoods, because its input was previous crime reports\u2014even if those reports were themselves the product of over-policing. In yet another example, LinkedIn\u2019s search function has a <a href=\"https:\/\/www.seattletimes.com\/business\/microsoft\/how-linkedins-search-engine-may-reflect-a-bias\/\">bias<\/a> for male names. 
In each of these examples, the AI technology learned its behavior from some of the information given to it, then reproduced that behavior across all of its results. Therein lies the danger: even when only some of the inputs contain dangerous biases and prejudices, once the machine learns them, those biases and prejudices become part of the output.<br \/>\nThis is especially troublesome when the technology was put in place precisely to eliminate bias. Automated evaluation is likely to be <a href=\"https:\/\/www.theguardian.com\/inequality\/2017\/aug\/08\/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses\">most damaging<\/a> to the more vulnerable individuals in a society, who are the most likely to be evaluated by an automated system\u2014especially with these technologies being utilized <a href=\"https:\/\/newrepublic.com\/article\/144644\/turns-algorithms-racist\">more and more<\/a> within the justice system. But despite the clear potential for significant future problems, it\u2019s important not to get caught up solely in what could go wrong. With the problem identified, at least to some degree, researchers and developers can focus not only on ways to <a href=\"https:\/\/newrepublic.com\/article\/144644\/turns-algorithms-racist\">remediate<\/a> the technology, but also <a href=\"https:\/\/newrepublic.com\/article\/144644\/turns-algorithms-racist\">to learn<\/a> from it and utilize it. These systems, by their very nature, can help identify where biases and inequality exist in different sectors of society. The question then becomes: once it is known, what does society do to fix it?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the wake of the consequential 2016 election, during which artificial intelligence was used to potentially influence voters, deeper questions about AI present themselves\u2014one of which is: can AI threaten human rights? The answer is most certainly a resounding yes, because it already has. 
To be clear, this is not a suggestion that robots are, on <a href=\"https:\/\/journals.law.unc.edu\/ncjolt\/blogs\/artificial-intelligences-emerging-threat-human-rights\/\" class=\"more-link\">&#8230;<\/a><\/p>\n","protected":false},"author":1,"featured_media":5451,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[51],"tags":[],"_links":{"self":[{"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/posts\/5450"}],"collection":[{"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/comments?post=5450"}],"version-history":[{"count":1,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/posts\/5450\/revisions"}],"predecessor-version":[{"id":6994,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/posts\/5450\/revisions\/6994"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/media\/5451"}],"wp:attachment":[{"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/media?parent=5450"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/categories?post=5450"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/tags?post=5450"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}