Liability for AI Agents
Artificial intelligence (“AI”) is becoming integral to modern life, fueling innovation while presenting complex legal challenges. Unlike traditional software, AI operates with a degree of autonomy, producing outcomes that its developers or deployers cannot fully anticipate. Advances in underlying technology have further enhanced this autonomy, giving rise to AI agents: systems capable of interacting with their environment independently, often with minimal or no human oversight. As AI decision-making—like that of humans—is inherently imperfect, its increasing deployment inevitably results in instances of harm, prompting the critical question of whether developers and deployers should be held liable as a matter of tort law.
This question is frequently answered in the negative. Many scholars, adopting a framework of technological exceptionalism, assume AI to be uniquely disruptive. Citing the opacity and unpredictability of AI models, they contend that AI challenges conventional notions of causality, rendering existing liability regimes inadequate.
This Article offers the first comprehensive normative analysis of the liability challenges posed by AI agents through a law-and-economics lens. It begins by outlining an optimal AI liability framework designed to maximize economic and societal benefits. Contrary to prevailing assumptions about AI’s disruptiveness, this analysis reveals that AI largely aligns with traditional products. While AI presents some distinct challenges—particularly in its complexity, opacity, and potential for benefit externalization—these factors call for targeted refinements to existing legal frameworks rather than an entirely new paradigm.
This holistic approach underscores the resilience of traditional legal principles in tort law. While AI undoubtedly introduces novel complexities, history shows that tort law has effectively navigated similar challenges before. For example, AI’s causality issues closely resemble those in medical malpractice cases, where the impact of treatment on patient recovery can be uncertain. The legal system has already addressed these issues, providing a clear precedent for extending similar solutions to AI. Likewise, while the traditional distinction between design and manufacturing defects does not map neatly onto AI, there is a compelling case for classifying inadequate AI training data as a manufacturing defect—aligning AI liability with established legal doctrine.
Taken together, this Article argues that AI agents do not necessitate a fundamental overhaul of tort law but rather call for targeted, nuanced refinements. This analysis offers essential guidance on how to effectively apply existing legal standards to this evolving technology.
PDF: https://journals.law.unc.edu/ncjolt/wp-content/uploads/sites/4/2025/04/Herbosch-Liability-for-AI-Agents.pdf
Author: Maarten Herbosch
Volume 26, Issue 3