{"id":9164,"date":"2023-09-20T05:36:00","date_gmt":"2023-09-20T05:36:00","guid":{"rendered":"https:\/\/ncjolt.org\/?p=9164"},"modified":"2024-02-14T00:58:58","modified_gmt":"2024-02-14T00:58:58","slug":"building-a-i-regulatory-frameworks-two-distinct-approaches-and-key-takeaways","status":"publish","type":"post","link":"https:\/\/journals.law.unc.edu\/ncjolt\/blogs\/building-a-i-regulatory-frameworks-two-distinct-approaches-and-key-takeaways\/","title":{"rendered":"Building A.I. Regulatory Frameworks: Two Distinct Approaches and Key Takeaways"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"1024\" height=\"683\" src=\"https:\/\/journals.law.unc.edu\\\/ncjolt\/wp-content\/uploads\/sites\/4\/2024\/02\/iStock-1184959589-1024x683.jpg\" alt=\"\" class=\"wp-image-9165\" srcset=\"https:\/\/journals.law.unc.edu\/ncjolt\/wp-content\/uploads\/sites\/4\/2024\/02\/iStock-1184959589-1024x683.jpg 1024w, https:\/\/journals.law.unc.edu\/ncjolt\/wp-content\/uploads\/sites\/4\/2024\/02\/iStock-1184959589-300x200.jpg 300w, https:\/\/journals.law.unc.edu\/ncjolt\/wp-content\/uploads\/sites\/4\/2024\/02\/iStock-1184959589-1536x1025.jpg 1536w, https:\/\/journals.law.unc.edu\/ncjolt\/wp-content\/uploads\/sites\/4\/2024\/02\/iStock-1184959589-2048x1367.jpg 2048w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><strong>Proposed A.I. Framework Hearing<\/strong><\/p>\n\n\n\n<p>On Tuesday, September 13, 2023, the U.S. Senate Committee on the Judiciary\u2019s Subcommittee on Privacy, Technology, and the Law held a&nbsp;<a href=\"https:\/\/www.judiciary.senate.gov\/committee-activity\/hearings\/oversight-of-ai-legislating-on-artificial-intelligence\">hearing<\/a>&nbsp;titled&nbsp;<em>Oversight of A.I.: Legislating on Artificial Intelligence<\/em>. Subcommittee Chair Richard Blumenthal, a Democrat from Connecticut, and Ranking Member Josh Hawley, a Republican from Missouri, led the hearing as a method of introducing their&nbsp;<a href=\"https:\/\/www.blumenthal.senate.gov\/imo\/media\/doc\/09072023bipartisanaiframework.pdf\">proposed framework<\/a>&nbsp;for creating \u201crisk-based\u201d regulation on artificial intelligence (\u201cA.I.\u201d). Senators Blumenthal and Hawley invited three distinguished guests\u2014Nvidia chief scientist William Dally, Microsoft president Brad Smith, and Boston University law professor Woodrow Hartzog\u2014to provide testimony and aid the A.I.-focused discussion. Insights from each guest paint a picture of two distinct approaches to A.I. regulation, which suggest a long road ahead for bipartisan debate.<\/p>\n\n\n\n<p><strong>Sensible Balance Approach<\/strong><\/p>\n\n\n\n<p>In order to comfortably argue for a balanced approach to A.I. regulation, one which caters to both industry innovation and public protection, Nvidia chief scientist William Dally felt compelled to quell the public\u2019s \u201cscience fiction\u201d-based fears. First, Dally&nbsp;<a href=\"https:\/\/www.judiciary.senate.gov\/imo\/media\/doc\/2023-09-12_pm_-_testimony_-_dally.pdf\">set the stage<\/a>: \u201cthe A.I. genie is already out of the bottle.\u201d Although the concept of A.I. has been around&nbsp;<a href=\"https:\/\/sitn.hms.harvard.edu\/flash\/2017\/history-artificial-intelligence\/\">since the 1950s<\/a>, the general public did not get a taste of the full power of A.I. 
In light of [growing public concern](https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/) about the power of ChatGPT, a program developed by OpenAI with [strong support from Nvidia](https://www.gizmochina.com/2023/07/25/openai-nvidia-gpu-new-ai/#:~:text=NVIDIA%20has%20supplied%20around%2020%2C000,a%20more%20advanced%20AI%20model.), Dally testified that “[a]t its core, A.I. is a software program, not a nuclear reactor.” He reassured the American public that “humans will always decide how much decision-making power to cede to A.I. models. [They] will never seize power by themselves.” The need for Dally, a top executive at a technology company poised to gain immensely from A.I.’s future prominence, to calm Americans’ worries of future catastrophe rather than engage with the public’s perspective is striking.

Dally, along with Microsoft president Brad Smith, advocated for a balanced approach to A.I. regulation. In Dally’s view, “[Policymakers] can ensure the safe, trustworthy, and ethical deployment of A.I. systems without suppressing innovation by researchers, academics, and enterprises working on new applications today.” Smith [similarly described this](https://www.judiciary.senate.gov/imo/media/doc/2023-09-12_pm_-_testimony_-_smith.pdf) as a “sensible balance that can both protect the public and advance innovation.” Dally also posited that this approach requires the “participation of every major power,” particularly foreign governments and innovators. Smith added a note of urgency: the American A.I. industry must work quickly with the government to capture A.I.’s potential responsibly, as “those countries that succeed in rapidly adopting and using A.I. responsibly are the ones most likely to reap the greatest benefits.” Still, Dally and Smith’s innovation-driven outlook likely understates the degree of caution that legal scholars like Woodrow Hartzog urge.

**Justification-First Approach**

On the other hand, Boston University law professor [Woodrow Hartzog highlighted](https://www.judiciary.senate.gov/imo/media/doc/2023-09-12_pm_-_testimony_-_hartzog.pdf) the need to place the burden on industry rather than on the American people. Under this “justification-first approach,” a proposed new federal agency for A.I. oversight would evaluate the trustworthiness of A.I. developers before they enter the market. This framework could “flip the presumption that burdens society with the risk of dangerous systems by requiring companies to justify their systems by proving they will not harm us.” In Hartzog’s view, invoking the buzzwords “innovation” and “progress” as cop-outs to avoid concrete policy choices may cause Congress to forfeit the potential for bright-line prohibitions on A.I. systems. Thus, Hartzog strongly encouraged lawmakers to rethink “whether particular A.I. systems should exist at all, and under what circumstances it should ever be developed or deployed.”

Hartzog’s legal- and ethics-driven perspective, in contrast to the industry- and innovation-driven perspectives of Dally and Smith, illuminated key issues in crafting A.I. regulation. First, [Hartzog argued](https://www.judiciary.senate.gov/imo/media/doc/2023-09-12_pm_-_testimony_-_hartzog.pdf) that focusing on the individual consumer’s right to control their data would “overwhelm people with choices and delude them about what’s really going on.” As a result, lawmakers may overlook “how power and information are unequally distributed and deployed.” Second, to counter this fundamental flaw of “information asymmetry,” Hartzog advocated for A.I. regulation rooted in mandatory duties of [loyalty](https://www.law.cornell.edu/wex/duty_of_loyalty), [care](https://www.law.cornell.edu/wex/duty_of_care), and [confidentiality](https://www.law.cornell.edu/wex/attorneys_duty_of_confidentiality#:~:text=Definition,legal%20demands%20for%20client%20information.), key pillars of American contract, business, and agency law intended to “mitigate power imbalances in relationships.” These duties would, according to Hartzog, build A.I. regulatory frameworks around the public’s understanding of anti-betrayal norms.

**Safety Brakes**

One of the top insights from the guest testimony was Smith’s discussion of the fundamental need for “safety brakes.” “Safety brakes,” [as Smith detailed](https://www.judiciary.senate.gov/imo/media/doc/2023-09-12_pm_-_testimony_-_smith.pdf), would be mechanisms for the full shutdown of A.I. models that control critical infrastructure, such as the electric grid or traffic patterns, to “promote accountability” by ensuring that humans continue to have the final say. Without this emergency shutdown mechanism, the public’s perception of future catastrophe may “become a new reality.” Still, it would be ideal if humans never had to hit the kill switch: if lawmakers get it right this time, the regulatory framework surrounding future A.I. development could head off concerns of societal harm before they materialize.

**Blumenthal and Hawley’s Potential Approaches**

Based on their opening statements alone, Blumenthal and Hawley may align with these differing approaches to A.I. regulation, which could lead to bipartisan success in the long run.
In [Subcommittee Chair Blumenthal’s view](https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-legislating-on-artificial-intelligence), “there is a deep appetite [for] … regulation that encourages the best in American free enterprise, but at the same time provides the kind of protections that we do in other areas of our economic activity.” This likely aligns with the sensible balance approach that Dally and Smith advocated, in which industry leaders and lawmakers work together to ensure that the American people are protected while innovation still flourishes. In [Ranking Member Hawley’s view](https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-legislating-on-artificial-intelligence), A.I. development should “work for the American people, [so] that it’s good for working people, that it’s good for families, [and] that we don’t make the same mistakes that Congress made with social media.” This view likely aligns with the justification-first approach that Hartzog advocated, in which agency oversight first determines whether new A.I. developments can be trusted by, and will benefit, the American people.

Ultimately, these contrasting approaches to A.I. regulation may converge into a combined approach, one that strikes a balance, as always, between American capitalism and protecting the people, while still providing initial bright-line guardrails restricting entry into the market. It is unclear how long the bipartisan debate will take to reach such a combined approach, but the September 12 hearing was a substantial step toward putting A.I. regulatory thinking at the forefront of both the public’s and the industry’s minds.

**Ben Brown**

Ben graduated from the University of North Carolina at Chapel Hill with a double major in Environmental Studies and Public Policy. At UNC Law, Ben is the Sustainability Coordinator for the Environmental Law Project and a staff member for the North Carolina Journal of Law and Technology. Ben’s legal interests span from corporate and business law to data privacy, energy and environment, and A.I.