Summary
Panel 2: Navigating Regulation and Enforcement in the A.I. Era
at
The George Washington Law Review’s Vol. 92 Symposium:
Legally Disruptive Emerging Technologies
Summary authored by Aristides N. Hadjipanteli.
The second panel of The George Washington Law Review’s Volume 92 Symposium, “Navigating Regulation and Enforcement in the A.I. Era,” was moderated by Professor Alicia Solow-Niederman, The George Washington University Law School. The panel featured Dean Michael Abramowicz, The George Washington University Law School; Professor John F. Duffy, the University of Virginia School of Law; Professor David Engstrom, Stanford Law School; Professor Daniel Ho, Stanford Law School; and Professor Richard Re, the University of Virginia School of Law.
Professor Solow-Niederman began the panel by welcoming all attendees and joked that she had considered using ChatGPT for her remarks but deferred to “good old-fashioned human moderation instead.” Professor Solow-Niederman introduced the four essays from the panelists: (1) “Major Technological Questions” by Dean Abramowicz and Professor Duffy; (2) “The Automated State: A Realist View” by Professor Engstrom; (3) “AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing,” a group effort by Professor Ho and eight co-authors; and (4) “Artificial Authorship and Judicial Opinions” by Professor Re. In concluding her remarks, Professor Solow-Niederman noted how each paper asks, in its own way, whether the issues society is witnessing with regard to A.I. are fundamentally new or whether they magnify or excavate old issues, and what that determination reveals about possible future interventions. Each paper was allocated 25 minutes, and the authors spoke in turn.
Dean Abramowicz and Professor Duffy presented the view that technology is exceptional and warrants a distinct major technological questions approach to ensure innovations are not curtailed in their infancy. Professor Duffy opened with the motivation for the co-authored paper: the extraordinary velocity of technological change seen over the last 250 years. A quote was displayed which read: “[r]ecent years ‘have seen vastly more changes, and a progress vastly more accelerated, than any that preceded them: they have been years of another world.’” Professor Duffy revealed that, though the quote reads like a recent one, it was written in 1867 by Charles Francis Adams, Jr., an early advocate of what is today called “the administrative state,” during the steamboat revolution of his time. The quote highlights a question that persists in our era: how to deal with the intersection of rapidly changing technology and the law.
Professor Duffy presented the paper’s thesis that courts should be skeptical of arguments that preexisting sources of law control major technological questions. The reasoning behind this thesis is that early authorities could not have had good information about new technological questions; therefore, courts should not look to those authorities (statutes and common law decisions). Professor Duffy addressed the counterargument that, because the pace of change is so fast and Congress is so slow, the courts should do more to adapt old statutes to new circumstances. The authors’ answers to this counterargument were: (1) our government was designed to be slow and yet has produced the most technologically innovative economy in the world; (2) Congress can write laws that anticipate technological change; and (3) existing administrative agencies are not likely to have the expertise and jurisdictional scope needed to address new technologies.
Dean Abramowicz summarized the major questions doctrine as courts presuming that Congress could not have imagined “major” issues, such as technological change, when drafting statutes. Dean Abramowicz cited three factors considered by courts applying the doctrine: (1) the sheer magnitude of the regulation (economic and political); (2) whether the agency is making decisions beyond its expertise or outside the scope of its jurisdiction; and (3) whether the text of the statute indicates the scope of the delegation. Dean Abramowicz acknowledged an ideological split on the major questions doctrine: conservatives tend to view the doctrine as a useful corrective, while liberals view it as aggressive overreach. However, the Dean noted that in the common law context, the ideological implications are not as obvious. A case discussion followed, illustrating the three factors of the major questions doctrine as applied to other societal issues, including Biden v. Nebraska (student debt), West Virginia v. EPA (clean power plan), and Alabama Ass’n of Realtors v. HHS (evictions and COVID-19).
Professor Duffy presented the historical examples of photography and airplane overflights as illustrations of how the major questions doctrine might have applied in those contexts. Dean Abramowicz then offered the modern example of cryptocurrencies and whether they may be defined as “investment contracts.” Dean Abramowicz highlighted how a hyper-legalistic approach, such as applying the case law of SEC v. Howey (1946), would be inappropriate. Consequently, the Dean advocated for the major questions approach since (1) there is ambiguity in this instance and (2) the technological novelty should weigh against construing a statute to address cryptocurrencies, leaving it to Congress, which is better situated, to pass legislation providing clarity.
Professor Engstrom’s presentation focused on the current challenges of public sector A.I. use, arguing that a realist view demands working within the modern administrative state as it currently stands. The presentation opened with horror stories about the government’s use of A.I. One such story was that of Robert Williams, an African-American man in Detroit who was wrongly identified by a facial recognition system and jailed for 30 hours before police realized the mistake. Though Williams’ case occurred in 2020, Professor Engstrom noted that there has been a parade of stories just like it ever since, fueling a strong and mounting critique among academics and activists of government use of A.I.
Professor Engstrom then walked through a catalog of arguments that have made their way into academic literature and public debate, and explained how they have resulted in calls for more regulation of the government when it uses A.I. and other forms of automation, including absolute prohibitions on particular A.I. uses, requirements that humans always be kept in the loop, and FDA-style licensing schemes that must be satisfied before a government agency can deploy a tool. Professor Engstrom’s presentation emphasized three claims: (1) that automation’s ambiguities will result in A.I. use being (mostly) litigated rather than legislated; (2) that, if so, this might actually be a good thing, as existing laws, rules, and doctrines (especially in administrative law) have grappled for decades with the trade-offs posed by government automation and can be adapted to address the issues now arising with A.I.; and (3) that there will be a lot of work to do if existing administrative law is to be adapted to regulate A.I. The presentation concluded with a recommendation for a Weberian approach: paying close attention to how bureaucracies actually use A.I. and observing the intersection of (1) law; (2) social science; (3) organizational theory; and (4) computer science. Professor Engstrom underscored that wise regulation in this area will require bringing all of these bodies of knowledge to bear in the years to come in order to have the most impact.
Professor Ho began his presentation with gratitude and humor, first thanking his co-authors on the paper and then displaying two pictures: (1) a headshot of Dean Abramowicz and (2) an image of Star Wars characters. Professor Ho explained how A.I. could generate a hybrid combining the two images, and then shared his A.I.-generated “masterpiece” with the audience: an image of the Star Wars characters morphed with Dean Abramowicz’s face, which elicited several laughs. On a more serious note, Professor Ho explained how this new, incredible, and powerful technology has led to all sorts of headlines that have gripped Capitol Hill since the beginning of the year. Professor Ho’s presentation highlighted the three parts of his co-authored paper: first, a walkthrough of the different posited harms of A.I.; second, an explanation of A.I.’s regulatory alignment problem; and third, a discussion of the four most conventional proposals for A.I. regulation: disclosure, licensing, registration, and audit.
Professor Ho discussed illustrative examples, examining instances where A.I. has erred but also where it has proved to be an effective tool. One example of A.I. erring was Amazon having to scrap a resume-scanning tool it had built because the tool systematically discounted the names of women’s colleges, producing gender bias. Conversely, A.I. proved to be a useful tool for the Internal Revenue Service in uncovering disparities in existing legacy systems, finding that black taxpayers were audited at roughly three to five times the rate of non-black taxpayers. Professor Ho then further articulated what he and his co-authors call the “A.I. regulatory alignment problem,” which refers to problems in aligning A.I. systems with human values and regulatory objectives. In concluding, Professor Ho dissected the four conventional proposals for A.I. regulation (disclosure, licensing, registration, and audit) and presented concerns about the technical and institutional feasibility of each.
The final speaker, Professor Re, engaged in a thought experiment about judicial authorship by A.I. as an opportunity to think about the very nature of judicial opinions and the fundamental nature of legal writing in the legal system itself. Professor Re opened with a recent example from the news of a U.K. appellate judge who had admitted to using A.I. in writing part of an opinion, finding the tool to be “jolly useful.” Professor Re’s presentation focused on two main claims: (1) reason’s demise; and (2) the judiciary’s peril. In discussing the first claim, Professor Re emphasized a concern that if a large amount of A.I.-generated “reason” and rhetoric becomes available so cheaply, humans will increasingly defer to its use. This raises a broader question as to whether A.I. in legal writing may lead to “the demise of legal reasoning as we know it.” Professor Re’s second claim focused on how A.I. use in legal writing may work to the judiciary’s peril, addressing potential dangers including the demystification of legal decision-making, a straining of the judiciary’s prestige, and increased efforts by political actors to insert themselves into judicial decision-making. Professor Re hypothesized that the judiciary will make some effort to retain human control of the judicial system, partly for self-interested reasons and partly because humans need to be involved in the process of justice.
Following the panelists’ presentations, Professor Solow-Niederman posed a question to Professor Re: whether litigants and friends of the court would similarly be barred from using A.I. tools, in light of his suggestion that judges might intentionally choose not to rely on the technology. Professor Re responded that resistance to A.I. tools would likely be strongest from behind the bench, but that he could imagine arguments not only for a judicial ethic against A.I. use but also for a lawyerly ethic against it. Professor Re further suggested that one way to avoid a simplistic bar on A.I. tools would be to employ them as “counter tools,” a means of stripping rhetoric and legal biases out of legal arguments in an adversarial manner.
Professor Solow-Niederman then opened the floor to questions from the audience. The first question sought insight into where a line might be drawn in regulating government use of A.I. based on its level of sophistication, and what the role of courts and politicians might look like in this area. Professor Ho fielded this question and explained that public sector technology is at least a generation behind the private sector; therefore, the first structural intervention relating to government use of A.I. ought to be getting technologists and experts into the government so they can provide meaningful oversight and systematic engagement with systems as they are being built out. Professor Ho offered a statistic from a recent A.I. Index survey which found that only about 2% of experts with PhDs had considered a career in the public sector, while about 60% had considered one in industry.
Professor Ho also reacted to the question Professor Solow-Niederman had posed to Professor Re, explaining that while it is interesting to think about the long-term consequences of A.I. in judicial decision-making, the empirical reality is that A.I. is still in its infancy. Professor Ho highlighted that A.I. models that have been benchmarked did not perform well on a wide range of legal tasks, often hallucinating facts that did not exist. Consequently, Professor Ho questioned whether concerns about A.I. use in judicial opinions are premature or speculative at present.
In concluding the panel session, one audience member asked Professor Engstrom to discuss further what he believes human organizations using A.I. tools will look like in the future, from both an agency perspective and a private sector perspective. Professor Engstrom candidly replied that he tends to avoid such “event horizon” or distant-future hypotheticals, preferring to confine his analyses to the near or middle term. Professor Engstrom explained that this is the best use of our focus in terms of what society needs to be doing, above all else, “because what we do now will have path dependent implications for what happens later.”