On February 14, 2024, Emily Black, assistant professor of computer science, received the Privacy Papers for Policymakers Award from the Future of Privacy Forum for her work titled “Less Discriminatory Algorithms,” which is forthcoming in the Georgetown Law Journal (Vol. 113, No. 1, 2024).
Black’s article draws on computer science research on model multiplicity to argue for a more powerful interpretation of the disparate impact doctrine, a key component of US civil rights law, as applied to AI decision-making systems. Model multiplicity refers to the phenomenon that there are almost always multiple possible models with equivalent performance for a given prediction problem, and that these models can behave differently along other axes, such as selection rates across demographic groups. Black and her co-authors argue that this phenomenon creates a proactive duty for companies that use algorithmic systems in traditional civil rights domains (e.g., housing, employment, and credit) to search for and implement less discriminatory algorithms (LDAs). This contrasts with the current interpretation of disparate impact, which places the burden of finding LDAs on the third parties that bring discrimination cases against these companies, such as civil society organizations. Such third parties rarely have the resources (e.g., data access, AI expertise) to find less discriminatory algorithms, which may help explain why there have been extremely few cases alleging disparate impact in AI systems, and even fewer LDAs found by outside organizations. The authors’ proposal could therefore make the disparate impact doctrine a more effective tool for curtailing discrimination in AI systems.
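As a rough illustration of model multiplicity, the sketch below (in Python with scikit-learn) trains two models of comparable accuracy on synthetic data, one of which relies on a feature correlated with group membership and therefore selects members of the two groups at different rates. The data-generating process, features, and thresholds are illustrative assumptions, not drawn from Black’s paper.

```python
# Hypothetical sketch: two models with near-identical accuracy can differ
# in how often they select members of each demographic group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                        # demographic group label (0 or 1)
skill = rng.normal(0, 1, n)                          # latent qualification, independent of group
noisy1 = skill + rng.normal(0, 1, n)                 # a noisy measurement of skill
noisy2 = skill + rng.normal(0, 1, n)                 # another independent noisy measurement
proxy = skill + 0.8 * group + rng.normal(0, 1, n)    # a measurement that also encodes group
y = (skill + rng.normal(0, 0.5, n) > 0).astype(int)  # outcome depends only on skill

X_a = np.column_stack([noisy1, proxy])   # model A uses the group-correlated proxy
X_b = np.column_stack([noisy1, noisy2])  # model B swaps it for a neutral measurement
Xa_tr, Xa_te, Xb_tr, Xb_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X_a, X_b, y, group, test_size=0.5, random_state=0
)

for name, X_tr, X_te in [("A (uses proxy)", Xa_tr, Xa_te), ("B (drops proxy)", Xb_tr, Xb_te)]:
    pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)
    acc = (pred == y_te).mean()
    gap = pred[g_te == 1].mean() - pred[g_te == 0].mean()  # difference in selection rates
    print(f"Model {name}: accuracy={acc:.3f}, selection-rate gap={gap:+.3f}")
```

On data like this, the two models typically reach nearly the same accuracy, while only the one relying on the group-correlated feature shows a sizable gap in selection rates. In this toy setting, model B would be the less discriminatory alternative that, under the article’s argument, a deployer could be expected to look for.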
The Privacy Papers for Policymakers Award aims to highlight important work that analyzes current and emerging privacy issues and proposes achievable short-term solutions or new means of analysis that could lead to real-world policy solutions. Black traveled to Washington, D.C., on February 27 for the 14th annual award ceremony.