Ryan Abbott
86 Geo. Wash. L. Rev. 1
Artificial intelligence is part of our daily lives. Whether working as chauffeurs, accountants, or police, computers are taking over a growing number of tasks once performed by people. As this occurs, computers will also cause the injuries inevitably associated with these activities. Accidents happen, and now computer-generated accidents happen. The recent fatality involving Tesla’s autonomous driving software is just one example in a long series of “computer-generated torts.”
Yet hysteria over such injuries is misplaced. In fact, machines are, or at least have the potential to be, substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than human drivers. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention. Under current legal frameworks, suppliers of computer tortfeasors are likely strictly liable for their harms. This Article argues that where a supplier can show that an autonomous computer, robot, or machine is safer than a reasonable person, the supplier should be liable in negligence rather than under strict liability. The negligence test would focus on the computer’s act instead of its design, and in a sense, it would treat a computer tortfeasor as a person rather than a product. Negligence-based liability would incentivize automation when doing so would reduce accidents, and it would continue to reward suppliers for improving safety.
More importantly, principles of harm avoidance suggest that once computers become safer than people, human tortfeasors should no longer be measured against the hypothetical reasonable person standard that has been employed for hundreds of years. Rather, individuals should be judged against computers. To appropriate the immortal words of Justice Holmes, we are all “hasty and awkward” compared to the reasonable computer.