Risky AI: A Brief Look at the EU’s Proposed Artificial Intelligence Act
This year, the European Union and the United States have taken steps to regulate artificial intelligence (AI) systems. The EU and US governments, however, have yet to align on the type of regulation AI requires.
EU leadership has touted the proposal as “the first initiative, worldwide, that provides a legal framework for Artificial Intelligence,” even though the proposed Artificial Intelligence Act (AIA) comes about three months after the National Artificial Intelligence Initiative Act was codified in the United States. How can these two things be reconciled? The answer is that the US and EU legislation regulate artificial intelligence in different ways.
In the United States, the National AI Initiative Act merely established a pathway for “coordinating AI research and policy across the federal government and a national network of AI research institutes” by establishing a National AI Initiative Office. In sum, the US has yet to propose any legal rules for AI, while the EU has proposed new, comprehensive legislation to govern AI systems.
A main controversy surrounding the EU’s AIA proposal is its overly broad definition of “artificial intelligence.” The proposal defines an artificial intelligence system as software that is developed with one or more of the listed techniques and approaches and that can generate outputs such as content, predictions, recommendations, or decisions. While the Commission may amend the list of techniques and approaches covered by the AIA, the current list includes the following: machine learning approaches that use a wide variety of methods, logic- and knowledge-based approaches, and statistical approaches such as Bayesian estimation.
This definition of artificial intelligence has been criticized as “hopelessly vague,” and some have claimed that the EU is proposing to regulate the use of Bayesian estimation itself. It is worth clarifying that Bayesian estimation will only fall under the AIA proposal if the estimation contributes to software that generates outputs such as content, predictions, recommendations, or decisions.
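To see why critics worry the definition sweeps in ordinary statistics, consider a minimal sketch of Bayesian estimation (a Beta-Binomial coin-flip model, chosen here purely for illustration and not drawn from the AIA text). Even this few-line program arguably “generates predictions” from a statistical approach listed in the proposal:

```python
# Minimal Bayesian estimation: updating a Beta prior with coin-flip data.
# The posterior mean it produces is a "prediction" -- the kind of simple
# statistical output that, critics note, the AIA's broad definition
# could be read to capture.

def beta_posterior_mean(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior mean of a Bernoulli parameter under a Beta(a, b) prior.

    With a Beta prior and a binomial likelihood, the posterior is
    Beta(a + successes, b + failures); its mean is computed below.
    """
    a = prior_a + successes
    b = prior_b + failures
    return a / (a + b)

# Observing 7 heads and 3 tails under a uniform Beta(1, 1) prior:
estimate = beta_posterior_mean(7, 3)
print(round(estimate, 3))  # posterior mean = 8 / 12, i.e. 0.667
```

Whether software this simple should trigger regulatory obligations is exactly the line-drawing problem the proposal’s risk-based approach, discussed next, is meant to resolve.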
Still, most sources agree that the proposal’s definition is broad, albeit aligned with the legislative goal of capturing “software embodying machine learning, the older rules-based AI approach, and also the traditional statistical techniques that have long been used in creating models.” The European Commission limits the effect of the broad AI definition by making a risk-based approach the core of the proposal. The aim is to impose regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety.
The risk categories of AI identified in the proposal are as follows: unacceptable risk (strictly prohibited), high risk, limited risk, and minimal risk. For high-risk AI systems, the act requires high-quality data, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy, and security. Prohibited artificial intelligence covers four main categories: social scoring, dark-pattern AI, manipulation, and real-time biometric identification systems.

The social scoring provision appears to be a reaction to China’s social credit system, which uses big data gathered about both individuals and entities to improve and consolidate Chinese Communist Party rule. Dark-pattern AI systems refer to technologies that deploy subliminal techniques to distort a person’s behavior. The proposal’s ban on “manipulation” covers AI systems that exploit the vulnerabilities of a specific group in order to distort a person’s behavior.
In addressing public concern over AI systems used for law enforcement, the AIA prohibits real-time biometric identification systems unless their use falls under three broad exceptions: a search for potential victims of crime, certain threats to life, and the detection, localization, identification, or prosecution of criminal suspects. AI use in law enforcement persists as a major concern, and it is unclear whether the act’s exceptions are too broad to address that concern, especially since AI could still be used to detect suspects of any crime.
The AIA has a long way to go before becoming law in the European Union, but it is a comprehensive document that clearly sets out the EU’s objectives for regulating artificial intelligence. It will be interesting to see whether a future US regulatory proposal, informed by findings from the National AI Initiative Office, will adopt a similar tiered-risk approach. The software community will be eagerly awaiting a US regulatory definition of artificial intelligence, while much of the legal community will be paying particular attention to US regulations covering AI and law enforcement.
While the US has not indicated what future AI regulation may look like, the new EU-US agenda for global change (published in late 2020) included a commitment by the two governments to work together on a Transatlantic AI Agreement.
Much is still to come with regard to European Union and United States AI regulation.
Elisabeth Tidwell
Before law school, Elisabeth attended Oklahoma State University, where she earned a degree in Biosystems Engineering. Elisabeth is interested in intellectual property law and is a registered patent agent. She spends her free time taking care of her ten-year-old Yorkshire Terrier and traveling with her husband.