"Artificial Intelligence and the Discrimination Injury," by Andrew D., (March 14, 2025). UCLA School of Law, Public Law Research Paper No. 25-10, 78 Florida Law Review __ (forthcoming 2026), Available at SSRN: https://ssrn.com/abstract=5179224 or http://dx.doi.org/10.2139/ssrn.5179224.
For a decade, scholars have debated whether discrimination involving artificial intelligence (AI) can be captured by existing discrimination laws. This article argues that the challenge AI poses for discrimination law stems not from the specifics of any statute, but from the very conceptual framework of discrimination law. Discrimination law today is a species of tort, concerned with rectifying individual injuries, rather than a body of law aimed at broadly improving social or economic equality. As a result, the doctrine centers on blameworthiness and individualized notions of injury. But it is also a strange sort of tort, one that does not clearly define its injury. Defining the discrimination harm is difficult and contested. As a result, the doctrine skips over the injury question and treats a discrimination claim as a process question about whether a defendant acted properly in a single decision-making event. This tort-with-unclear-injury formulation effectively merges the questions of injury and liability: if a defendant did not act improperly, then no liability attaches, because a discrimination event did not occur. Injury is tied to the single decision event, and there is no room for recognizing discrimination injury without liability.

AI works by using algorithms (i.e., instructions for computers) to process and identify patterns in large amounts of data (“training”), and then using those patterns to make predictions or decisions when given new information.
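To make that training-then-prediction pipeline concrete, the following is a minimal sketch, not any real underwriting system; the data, the threshold rule, and the function names are all invented for illustration.

```python
# A minimal sketch of the training-then-prediction loop described above.
# All data, names, and the threshold rule are hypothetical illustrations.

# "Training" data: historical decisions as (credit_score, approved) pairs.
training_data = [(620, False), (640, False), (680, True), (700, True), (720, True)]

def train(examples):
    """Identify the simplest possible pattern in the data: a score
    threshold separating past approvals from past denials."""
    approved = [score for score, ok in examples if ok]
    denied = [score for score, ok in examples if not ok]
    # Place the cutoff halfway between the highest denial and lowest approval.
    return (max(denied) + min(approved)) / 2

def predict(threshold, score):
    """Apply the learned pattern to new information: a fresh applicant."""
    return score >= threshold

threshold = train(training_data)   # pattern extracted from past data
print(predict(threshold, 655))     # new applicant below the cutoff -> False
print(predict(threshold, 690))     # new applicant above the cutoff -> True
```

The point of the sketch is structural: the "pattern" is fixed at training time, by whoever builds the system, and is only later applied to new individuals, often by a different party.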
Researchers and technologists have repeatedly demonstrated that algorithmic systems can produce discriminatory outputs. Sometimes this is a result of training on unrepresentative data. In other cases, an algorithm will find and replicate hidden patterns of human discrimination embedded in the training data. A 2021 analysis by The Markup of mortgage lenders that used underwriting algorithms found that the lenders were far more likely to reject applicants of color than white applicants: 40% more likely for Latino or Hispanic Americans, 50% more likely for Asian Americans and Pacific Islanders, 70% more likely for Native Americans, and 80% more likely for Black Americans.
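The replication mechanism can be shown with a toy example. In the hedged sketch below, every figure, ZIP code, and name is invented; the point is only that a model trained on biased historical decisions can reproduce the disparity through a proxy feature (here, ZIP code) without ever seeing a protected attribute.

```python
# A toy illustration of bias replication; all data here is invented.
# Historical decisions: (zip_code, income_in_thousands, approved). Suppose
# past human decisions denied applicants from ZIP 10701 regardless of
# income, and that ZIP code correlates with race in this hypothetical market.
history = [
    ("10701", 80, False), ("10701", 95, False), ("10701", 90, False),
    ("10702", 60, True),  ("10702", 75, True),  ("10702", 55, True),
]

def train(rows):
    """'Learn' the approval rate per ZIP code -- the pattern that the
    historical decisions actually contain."""
    counts = {}
    for zip_code, _, approved in rows:
        yes, total = counts.get(zip_code, (0, 0))
        counts[zip_code] = (yes + approved, total + 1)
    return {z: yes / total for z, (yes, total) in counts.items()}

model = train(history)

# The model never saw race, but it absorbed the proxy: even a high-income
# applicant from the disfavored ZIP is scored as a near-certain denial.
print(model["10701"])  # 0.0 -> replicated pattern of denial
print(model["10702"])  # 1.0
```

Note that income is collected but never changes the outcome here; the model has simply memorized who was denied before, which is exactly the failure mode the mortgage findings above suggest.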
This formulation directly affects the regulation of AI discrimination in two ways: First, AI decision-making is distributed; it is a combination of software development, its configuration, and its application, all of which occur at different times and usually involve different parties. This means that the mental model of a single decision and decisionmaker breaks down in this context. Second, the process-based injury is fundamentally at odds with the existence of “discriminatory” technology as a concept. While we can easily conceive of discriminatory AI as a colloquial matter, if there is legally no discrimination event until the technology is used in an improper way, then the technology cannot be considered discriminatory until it is improperly used.
The analysis leads to two ultimate conclusions. First, while the applicability of disparate impact law to AI is unknown, as no court has addressed the question head-on, liability will depend in large part on the degree to which a court is willing to hold a decisionmaker (e.g., an employer, lender, or landlord) liable for using a discriminatory technology without adequate attention to its effects, for a failure either to comparison shop or to fix the AI. Given the shape of the doctrine, the fact that the typical decisionmaker is not tech savvy, and the likelihood that the decisionmaker purchased the technology on the promise that it was non-discriminatory, whether a court would find such liability is an open question. Second, discrimination law cannot be used to create incentives or penalties for the people best able to address the problem of discriminatory AI - the developers themselves. This Article therefore argues for supplementing discrimination law with a combination of consumer protection, product safety, and products liability - all legal doctrines meant to address the distribution of harmful products on the open market, and all better suited to directly addressing the products that create discriminatory harms.
Read the September 13, 2024 article from Brookings.
Read the August 25, 2021 article from The Markup.