AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing

Neel Guha, Christie M. Lawrence, Lindsey A. Gailmard, Kit T. Rodolfa, Faiz Surani, Rishi Bommasani, Inioluwa Deborah Raji, Mariano-Florentino Cuéllar, Colleen Honigsberg, Percy Liang & Daniel E. Ho
92 Geo. Wash. L. Rev. 1473

Calls for regulating artificial intelligence (“AI”) are widespread, but there remains little consensus on the specific harms that regulation can and should address or on the appropriate regulatory actions to take. Computer scientists propose technical solutions that may be infeasible or illegal; lawyers propose regulation that may be technically impossible; and commentators propose policies that may backfire. AI regulation, in that sense, has its own alignment problem: proposed interventions are often misaligned with societal values. This Article assesses the alignment and the technical and institutional feasibility of four dominant proposals for AI regulation in the United States: disclosure, registration, licensing, and auditing. Its caution against the rush to heavily regulate AI without addressing regulatory alignment rests on three arguments. First, AI regulatory proposals tend to suffer from both regulatory mismatch (vertical misalignment) and value conflict (horizontal misalignment). Clarity about a proposal’s objectives, feasibility, and impact may reveal that it is poorly matched to the harm it is intended to address. In some instances, the impulse for AI regulation may in fact be better addressed by non-AI regulatory reform. And the more concrete a proposed regulation becomes, the more it exposes tensions and tradeoffs between different regulatory objectives and values. Proposals that purport to address all that ails AI (safety, trustworthiness, bias, accuracy, and privacy) at once ignore the reality that many goals cannot be jointly satisfied. Second, the dominant AI regulatory proposals face common technical and institutional feasibility challenges: who in government should coordinate and enforce regulation, how can the scope of regulatory interventions avoid ballooning, and what standards should operationalize trustworthy AI values given the lack of technical consensus? Third, the federal government can, to varying degrees, reduce regulatory misalignment by designing interventions to account for feasibility and alignment considerations. This Article therefore closes with concrete recommendations to minimize misalignment in AI regulation.