From Progress to Peril: The Risks of Trump’s AI Executive Order

On January 23, 2025, President Trump signed a sweeping executive order on artificial intelligence, marking a dramatic shift in U.S. AI policy. Upon returning to office, one of his first major actions was to repeal President Biden’s landmark 2023 AI directive. Biden’s order had aimed to confront systemic bias in AI, requiring developers and agencies to audit algorithms for discriminatory patterns based on race, gender, and disability. It also targeted predictive policing tools that fueled the over-criminalization of Black communities, and it created pathways for individuals harmed by AI to challenge the systems that harmed them.
Trump’s new directive, however, strips away these safeguards, citing a need to “free” AI from “ideological bias” and “engineered social agendas.” At first glance, this language might sound neutral, even empowering—but what does it really mean? Who defines “ideological bias,” and whose agendas are being “engineered”? Without clear definitions, Trump’s order risks dismantling protections that could prevent biased AI from further harming vulnerable populations.
The dangers of unregulated AI are far from hypothetical. Take Amazon’s attempt to build a hiring algorithm, for example. The company trained its AI on a decade’s worth of resumes, the majority of which came from men. The result? Resumes containing the word “women’s”—as in “women’s rugby team” or “women’s college”—were ranked lower. This sophisticated tool didn’t eliminate bias; it reinforced it, penalizing candidates simply for their gender.
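To see how this happens mechanically, consider the toy Python sketch below. The resumes, outcomes, and scoring rule are entirely invented for illustration (Amazon’s actual model was never made public), but they capture the core dynamic: when past hires skew male, any word that correlates with female candidates ends up with a lower learned score.

```python
# A minimal, hypothetical sketch of how a ranking model trained on skewed
# historical hiring data can learn to penalize a single word. All data and
# the scoring rule are invented; Amazon's real system was far more complex.

from collections import Counter

# Toy "historical" outcomes: past resumes (as word sets) and hire decisions.
# Because most past hires were men, "women's" rarely co-occurs with a hire.
history = [
    ({"chess", "captain"}, 1),
    ({"rugby", "captain"}, 1),
    ({"debate", "club"}, 1),
    ({"women's", "chess", "captain"}, 0),
    ({"women's", "rugby", "captain"}, 0),
    ({"coding", "club"}, 1),
]

hired = Counter()
seen = Counter()
for words, outcome in history:
    for w in words:
        seen[w] += 1
        hired[w] += outcome

def score(resume_words):
    # Score a resume by the average historical hire rate of its words.
    rates = [hired[w] / seen[w] for w in resume_words if w in seen]
    return sum(rates) / len(rates) if rates else 0.0

# Two candidates with identical qualifications; one resume adds one word.
baseline = score({"rugby", "captain"})
flagged = score({"women's", "rugby", "captain"})
print(baseline, flagged)  # the candidate mentioning "women's" scores lower
```

The model never sees gender directly; it simply inherits the skew baked into its training data, which is exactly the failure mode the Amazon tool exhibited.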
Or consider HireVue, AI-driven hiring software that uses facial tracking to evaluate job candidates’ integrity and competence. While marketed as innovative, the system has been criticized for disproportionately harming people with disabilities: facial tracking algorithms can misinterpret atypical expressions or movements, effectively shutting out candidates based on physical characteristics irrelevant to their ability to do the job.
The harms extend far beyond the workplace. PredPol, a predictive policing tool, promises to forecast crime but often exacerbates racial disparities. It relies on historical crime data, which is already tainted by decades of over-policing in Black and Brown neighborhoods. This creates a vicious feedback loop: police are sent to these areas more often, leading to more arrests, further skewing the data and perpetuating the cycle.
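A toy simulation makes the loop concrete. Everything below is hypothetical (PredPol’s actual model is proprietary): two neighborhoods are given identical true crime rates, but one starts with more recorded arrests, and patrols are allocated wherever past arrests were highest.

```python
# A minimal, hypothetical simulation of the predictive-policing feedback
# loop described above. Neighborhoods, rates, and the allocation rule are
# invented for illustration, not drawn from PredPol's proprietary model.

# Two neighborhoods with the SAME underlying crime rate; "A" starts with
# more recorded arrests because it was historically over-policed.
recorded_arrests = {"A": 60, "B": 40}
true_crime_rate = {"A": 0.05, "B": 0.05}  # identical by construction

for year in range(1, 6):
    # The "predictive" model ranks neighborhoods by past recorded arrests
    # and sends the bulk of patrols to the top-ranked one.
    top = max(recorded_arrests, key=recorded_arrests.get)
    patrols = {hood: (80 if hood == top else 20) for hood in recorded_arrests}
    for hood in recorded_arrests:
        # Arrests are only recorded where officers are present to make them,
        # so patrol placement, not crime, drives the data.
        recorded_arrests[hood] += patrols[hood] * true_crime_rate[hood] * 20
    share = recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"year {year}: A's share of recorded arrests = {share:.0%}")
```

Run it and neighborhood A’s share of recorded arrests climbs year after year even though, by construction, crime is identical in both places. The data confirms the prediction only because the prediction shaped the data.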
Chicago’s Heat List offers another sobering example. This program flagged Robert McDaniel, a young Black man with no violent criminal record, as “high risk” simply because he lived in a heavily policed neighborhood. The algorithm subjected him to relentless surveillance and police scrutiny based on assumptions about his environment, not his actions. Similarly, Randal Quran Reid was wrongfully arrested after a flawed facial recognition tool misidentified him as the suspect in a Louisiana theft—a crime he had no connection to.
President Trump’s AI order becomes even more alarming when viewed alongside his political alliances. He has aligned himself with Elon Musk, whose companies, including Tesla and X (formerly Twitter), have been criticized for dismissing ethical concerns. Add to that the likely influence of tech giants like Google, Amazon, Microsoft, and Meta, all of which stand to benefit from deregulation, and the picture grows even bleaker. With corporate interests now likely steering policy, the risks of unchecked AI grow exponentially.
With the removal of Biden-era safeguards under Trump’s executive order, AI’s promise as a tool for progress will continue to be overshadowed by its capacity for harm. If history has shown us anything—from Amazon’s biased hiring tool to Robert McDaniel’s unwarranted surveillance—it is that AI systems are far from neutral. They reflect the biases of their creators and the data they are trained on. Removing accountability and transparency only makes it easier for these systems to perpetuate systemic discrimination.
As we move into this new era of AI policy, the question is no longer whether AI can be made fair. It is whether we, as a society, will demand fairness, or allow vague rhetoric and unchecked corporate influence to determine how safe the technological world we call home will be.
Mariam Syed
Mariam attended the University of Virginia, where she majored in Anthropology with a concentration in public health and bioethics. At UNC Law, she is a Dean’s Fellow, a staff member of the North Carolina Journal of Law and Technology, and the community outreach coordinator for the Asian American Law Students Association.