Building A.I. Regulatory Frameworks: Two Distinct Approaches and Key Takeaways

Proposed A.I. Framework Hearing

On Tuesday, September 12, 2023, the U.S. Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology, and the Law held a hearing titled Oversight of A.I.: Legislating on Artificial Intelligence. Subcommittee Chair Richard Blumenthal, a Democrat from Connecticut, and Ranking Member Josh Hawley, a Republican from Missouri, used the hearing to introduce their proposed framework for “risk-based” regulation of artificial intelligence (“A.I.”). Senators Blumenthal and Hawley invited three distinguished guests—Nvidia chief scientist William Dally, Microsoft president Brad Smith, and Boston University law professor Woodrow Hartzog—to provide testimony and aid the A.I.-focused discussion. Insights from each guest paint a picture of two distinct approaches to A.I. regulation, which suggest a long road ahead for bipartisan debate.

Sensible Balance Approach

In order to comfortably argue for a balanced approach to A.I. regulation, one which caters to both industry innovation and public protection, Nvidia chief scientist William Dally felt compelled to quell the public’s “science fiction”-based fears. First, Dally set the stage: “the A.I. genie is already out of the bottle.” Although the concept of A.I. has been around since the 1950s, the general public did not get a taste of the full power of A.I. until this past year—when ChatGPT was released in November 2022 and drew the attention of “hundreds of millions” of users in early 2023. In light of growing public concern over the power of ChatGPT, a program developed by OpenAI with strong support from Nvidia, Dally testified that “[a]t its core, A.I. is a software program, not a nuclear reactor.” He reassured the American public that “humans will always decide how much decision-making power to cede to A.I. models. [They] will never seize power by themselves.” The need for Dally—a top executive for a technology company poised to gain immensely from A.I.’s future prominence—to calm Americans’ worries of future catastrophe, instead of trying to understand the public’s perspective, is striking.

Dally, along with Microsoft president Brad Smith, advocated for a balanced approach to A.I. regulation. In Dally’s view, “[Policymakers] can ensure the safe, trustworthy, and ethical deployment of A.I. systems without suppressing innovation by researchers, academics, and enterprises working on new applications today.” Smith similarly described this as a “sensible balance that can both protect the public and advance innovation.” Dally also posited that this approach required “participation of every major power,” particularly foreign governments and innovators. Yet, in Smith’s view, the American A.I. industry must work quickly with the government to capture the innovation of A.I. in a responsible way, as “those countries that succeed in rapidly adopting and using A.I. responsibly are the ones most likely to reap the greatest benefits.” Dally and Smith’s innovation-driven desires likely understate the degree of caution that legal scholars like Woodrow Hartzog are choosing to adopt.

Justification-First Approach

On the other hand, Boston University law professor Woodrow Hartzog highlighted the need to place the burden on industry instead of the American people. Under this “justification-first approach,” the proposed new federal agency for A.I. oversight would evaluate the trustworthiness of A.I. developers before they enter the market. This framework, then, could “flip the presumption that burdens society with the risk of dangerous systems by requiring companies to justify their systems by proving they will not harm us.” Yet, in Hartzog’s view, invoking the buzzwords of “innovation” and “progress” as cop-outs to avoid making concrete policy choices may cause Congress to forfeit the potential for bright-line prohibitions on A.I. systems. Thus, Hartzog strongly encouraged lawmakers to rethink “whether particular A.I. systems should exist at all, and under what circumstances it should ever be developed or deployed.”

Hartzog’s legal and ethics-driven perspective, in contrast to the industry and innovation-driven perspectives of Dally and Smith, illuminated key issues with crafting A.I. regulation. First, Hartzog argued that focusing on the individual consumer’s right to control their data would “overwhelm people with choices and delude them about what’s really going on.” As a result, lawmakers may overlook “how power and information are unequally distributed and deployed.” Second, to counter this fundamental flaw of “information asymmetry,” Hartzog advocated for A.I. regulation to be rooted in mandatory duties of loyalty, care, and confidentiality—key pillars of American contract, business, and agency law intended to “mitigate power imbalances in relationships.” These duties would, according to Hartzog, build up A.I. regulatory frameworks around the public’s understanding of anti-betrayal norms.

Safety Brakes

One of the top insights from the guest testimony was Smith’s discussion of the fundamental need for “safety brakes.” “Safety brakes,” as Smith detailed, would be mechanisms for a full shutdown of A.I. models that control critical infrastructure, such as the electric grid or traffic patterns, to “promote accountability” by ensuring that humans continue to have the final say. Without this emergency shutdown mechanism, the public’s perception of future catastrophe may “become a new reality.” Yet, it would be ideal if humans never had to hit the kill switch. If lawmakers get it right this time, the regulatory framework surrounding future A.I. development could eliminate such concerns of societal harm altogether.

Blumenthal and Hawley’s Potential Approaches

Based on their opening statements alone, Blumenthal and Hawley may align with differing approaches to A.I. regulation, which may lead to bipartisan success in the long run. In Subcommittee Chair Blumenthal’s view, “there is a deep appetite [for]… regulation that encourages the best in American free enterprise, but at the same time provides the kind of protections that we do in other areas of our economic activity.” This likely aligns with the sensible balance approach that Dally and Smith advocated for, in which industry leaders and lawmakers work together to ensure that the American people are protected while innovation can still flourish. In Ranking Member Hawley’s view, A.I. development should “work for the American people, [so] that it’s good for working people, that it’s good for families, [and] that we don’t make the same mistakes that Congress made with social media.” This view likely aligns with the justification-first approach that Hartzog advocated for, in which agency oversight is required to first determine whether new A.I. developments can be trusted by and benefit the American people.

Ultimately, these contrasting approaches to A.I. regulation may result in a combined approach, one that strikes a balance—as always—between American capitalism and protecting the people, but that still provides initial bright-line guardrails restricting entry into the market. It is unclear how long the bipartisan debate will take to reach a potential combined approach, but the September 12 hearing provided a substantial step forward in putting A.I. regulatory thinking at the forefront of both the public’s and the industry’s minds.

Ben Brown

Ben graduated from the University of North Carolina at Chapel Hill with a double major in Environmental Studies and Public Policy. At UNC Law, Ben is the Sustainability Coordinator for the Environmental Law Project and a staff member for the North Carolina Journal of Law and Technology. Ben’s legal interests span from corporate and business law to data privacy, energy and environment, and A.I.