{"id":9477,"date":"2025-01-27T14:54:15","date_gmt":"2025-01-27T14:54:15","guid":{"rendered":"https:\/\/journals.law.unc.edu\/ncjolt\/?p=9477"},"modified":"2025-01-27T14:54:15","modified_gmt":"2025-01-27T14:54:15","slug":"from-progress-to-peril-the-risks-of-trumps-ai-executive-order","status":"publish","type":"post","link":"https:\/\/journals.law.unc.edu\/ncjolt\/blogs\/from-progress-to-peril-the-risks-of-trumps-ai-executive-order\/","title":{"rendered":"From Progress to Peril: The Risks of Trump\u2019s AI Executive Order"},"content":{"rendered":"\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"529\" height=\"299\" src=\"https:\/\/journals.law.unc.edu\/ncjolt\/wp-content\/uploads\/sites\/4\/2025\/01\/AI_Risk.png\" alt=\"\" class=\"wp-image-9478\" srcset=\"https:\/\/journals.law.unc.edu\/ncjolt\/wp-content\/uploads\/sites\/4\/2025\/01\/AI_Risk.png 529w, https:\/\/journals.law.unc.edu\/ncjolt\/wp-content\/uploads\/sites\/4\/2025\/01\/AI_Risk-300x170.png 300w\" sizes=\"(max-width: 529px) 100vw, 529px\" \/><\/figure>\n\n\n\n<p>On January 23, 2025, President Trump signed a sweeping <a href=\"https:\/\/www.whitehouse.gov\/fact-sheets\/2025\/01\/fact-sheet-president-donald-j-trump-takes-action-to-enhance-americas-ai-leadership\/\">executive order<\/a> on artificial intelligence, marking a dramatic shift in U.S. AI policy. Upon returning to office, one of his first major actions was the order to repeal President Biden\u2019s landmark 2023 AI directive. <a href=\"https:\/\/apnews.com\/article\/biden-ai-artificial-intelligence-executive-order-cb86162000d894f238f28ac029005059\">Biden\u2019s order<\/a> had aimed to confront systemic bias in AI, requiring developers and agencies to audit algorithms for discriminatory patterns based on <a href=\"https:\/\/apnews.com\/article\/trump-ai-artificial-intelligence-executive-order-eef1e5b9bec861eaf9b36217d547929c\">race, gender, and disability<\/a>. 
It also targeted predictive policing tools that led to the <a href=\"https:\/\/apnews.com\/article\/trump-ai-artificial-intelligence-executive-order-eef1e5b9bec861eaf9b36217d547929c\">over-criminalization of Black communities<\/a> while creating pathways for individuals harmed by AI to challenge its legality.<\/p>\n\n\n\n<p>Trump\u2019s new directive, however, strips away these safeguards, citing a need to \u201cfree\u201d AI from <a href=\"https:\/\/www.whitehouse.gov\/fact-sheets\/2025\/01\/fact-sheet-president-donald-j-trump-takes-action-to-enhance-americas-ai-leadership\/\">\u201cideological bias\u201d and \u201cengineered social agendas.\u201d<\/a> At first glance, this language might sound neutral, even empowering\u2014but what does it really mean? Who defines \u201cideological bias,\u201d and whose agendas are being \u201cengineered\u201d? Without clear definitions, Trump\u2019s order risks dismantling protections that could prevent biased AI from further harming vulnerable populations.<\/p>\n\n\n\n<p>The dangers of unregulated AI are far from hypothetical. Take Amazon\u2019s attempt to <a href=\"https:\/\/www.reuters.com\/article\/world\/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG\/\">build a hiring algorithm<\/a>, for example. The company trained its AI on a decade\u2019s worth of resumes, the majority of which came from men. The result? Resumes containing the word \u201cwomen\u2019s\u201d\u2014as in <a href=\"https:\/\/www.aclu.org\/news\/womens-rights\/why-amazons-automated-hiring-tool-discriminated-against\">\u201cwomen\u2019s rugby team\u201d<\/a> or \u201cwomen\u2019s college\u201d\u2014were ranked lower. 
This sophisticated tool didn\u2019t eliminate bias; it reinforced it, penalizing candidates simply for their gender.<\/p>\n\n\n\n<p>Or consider HireVue, an AI hiring software that uses <a href=\"https:\/\/www.bloomberglaw.com\/bloomberglawnews\/daily-labor-report\/X9RO6FCO000000?bna_news_filter=daily-labor-report#jcite\">facial tracking<\/a> to evaluate job candidates\u2019 <a href=\"https:\/\/www.bloomberglaw.com\/bloomberglawnews\/daily-labor-report\/X9RO6FCO000000?bna_news_filter=daily-labor-report#jcite\">integrity and competence<\/a>. While marketed as innovative, the system has been criticized for disproportionately harming people with disabilities. Facial tracking algorithms misinterpret non-standard expressions or movements, effectively shutting out candidates based on physical characteristics irrelevant to their ability to perform a job.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>With the removal of Biden-era safeguards under Trump\u2019s executive order, AI\u2019s promise as a tool for progress will continue to be overshadowed by its capacity for harm.<\/p><\/blockquote>\n\n\n\n<p>The harms extend far beyond the workplace. <a href=\"https:\/\/themarkup.org\/prediction-bias\/2023\/10\/02\/predictive-policing-software-terrible-at-predicting-crimes\">PredPol<\/a>, a predictive policing tool, promises to forecast crime but often exacerbates racial disparities. It relies on historical crime data, which is already tainted by decades of <a href=\"https:\/\/www.futurity.org\/police-patrol-black-hispanic-neighborhoods-2990582\/\">over-policing in Black and Brown neighborhoods<\/a>. This creates a vicious feedback loop: police are sent to these areas more often, leading to more arrests, further skewing the data and perpetuating the cycle.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.theverge.com\/c\/22444020\/chicago-pd-predictive-policing-heat-list\">Chicago\u2019s Heat List<\/a> offers another sobering example. 
This program flagged Robert McDaniel, a young Black man with no violent criminal record, as \u201chigh risk\u201d simply because he lived in a heavily policed neighborhood. The algorithm subjected him to relentless surveillance and police scrutiny based on assumptions about his environment, not his actions. Similarly, Randal Quran Reid was <a href=\"https:\/\/apnews.com\/article\/mistaken-arrests-facial-recognition-technology-lawsuits-b613161c56472459df683f54320d08a7\">wrongfully arrested<\/a> after a flawed facial recognition tool misidentified him as the suspect in a Louisiana theft\u2014a crime he had no connection to.<\/p>\n\n\n\n<p>President Trump\u2019s AI order becomes even more alarming when viewed alongside his political alliances. He has aligned himself with <a href=\"https:\/\/apnews.com\/article\/elon-musk-politics-trump-7e26c829af224a1f9d67c27cea085e68\">Elon Musk<\/a>, whose companies, including Tesla and Twitter, have been criticized for <a href=\"https:\/\/intpolicydigest.org\/the-ethical-implications-of-elon-musk-and-tesla-s-contempt-for-public-safety\/\">dismissing ethical concerns<\/a>. Add to that the <a href=\"https:\/\/apnews.com\/article\/trump-inauguration-tech-billionaires-zuckerberg-musk-wealth-0896bfc3f50d941d62cebc3074267ecd\">likely influence of<\/a> tech giants like Google, Amazon, Microsoft, and Meta, all of which stand to benefit from deregulation, and the picture grows even bleaker. With corporate interests now <a href=\"https:\/\/en.goobjoog.com\/tech-titans-and-trumps-america-the-global-risks-of-an-unlikely-alliance\/\">likely steering policy<\/a>, the risks of unchecked AI grow exponentially.<\/p>\n\n\n\n<p>With the removal of Biden-era safeguards under Trump\u2019s executive order, AI\u2019s promise as a tool for progress will continue to be overshadowed by its capacity for harm. 
If history has shown us anything\u2014from Amazon\u2019s biased hiring tool to Robert McDaniel\u2019s unwarranted surveillance\u2014it is that <a href=\"https:\/\/hbr.org\/2019\/10\/what-do-we-do-about-the-biases-in-ai\">AI systems are far from neutral<\/a>. They reflect the biases of their creators and the data they are trained on. Removing accountability and transparency only makes it easier for these systems to perpetuate systemic discrimination.<\/p>\n\n\n\n<p>As we move into this new era of AI policy, the question is no longer whether AI can be made fair. It is whether we, as a society, will demand fairness\u2014or allow vague rhetoric and unchecked corporate influence to define the safety of the technological world we call home.<\/p>\n\n\n\n<p><strong>Mariam Syed<\/strong><\/p>\n\n\n\n<p>Mariam attended the University of Virginia and majored in Anthropology with a concentration in public health and bioethics. At UNC law, Mariam is a Dean\u2019s Fellow, a staff member of the North Carolina Journal of Law and Technology, and is the community outreach coordinator for the Asian American Law Students Association.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On January 23, 2025, President Trump signed a sweeping executive order on artificial intelligence, marking a dramatic shift in U.S. artificial intelligence policy. Upon returning to office, one of his first major actions was the order to repeal President Biden\u2019s landmark 2023 AI directive. 
Biden\u2019s order had aimed to confront systemic bias in AI, requiring <a href=\"https:\/\/journals.law.unc.edu\/ncjolt\/blogs\/from-progress-to-peril-the-risks-of-trumps-ai-executive-order\/\" class=\"more-link\">&#8230;<\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[51],"tags":[605,297,417,607,608,606],"_links":{"self":[{"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/posts\/9477"}],"collection":[{"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/comments?post=9477"}],"version-history":[{"count":1,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/posts\/9477\/revisions"}],"predecessor-version":[{"id":9479,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/posts\/9477\/revisions\/9479"}],"wp:attachment":[{"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/media?parent=9477"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/categories?post=9477"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/journals.law.unc.edu\/ncjolt\/wp-json\/wp\/v2\/tags?post=9477"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}