Employment Discrimination, Algorithmic Fairness, Privacy, Corporate Independence, Algorithmic Discrimination, Enforcement
Workplace antidiscrimination laws must adapt to address today’s technological realities. If left underregulated, the rapidly expanding role of Artificial Intelligence (“AI”) in hiring practices risks creating new, more obscure modes of discrimination. Companies use these tools to reduce the duration and costs of hiring and to attract a larger pool of qualified applicants for their open positions. But how can we guarantee that these hiring tools yield fair outcomes when deployed? These issues are only beginning to be addressed at the federal, state, and city levels. This Note examines whether a new city law can be improved to serve as a crucial stepping stone for federal and local governments seeking to strengthen their regulatory apparatus for AI in employment.
This Note discusses the issues that algorithmic employment practices raise regarding discrimination, privacy, and corporate independence in employment decisions. It then analyzes New York City’s recently passed Local Law Int. No. 1894-A and finds significant gaps in the statutory language that threaten to undermine the law’s legislative goals. Drawing on the bill’s text and legislative history, this Note proposes changes to the bill’s delegated rulemaking authority and offers solutions to fill those gaps. Practical regulatory guidance for improving hiring algorithms would help ensure that they counteract, rather than reproduce, bias in the workplace.
Lindsey Fuchs, Note, Hired by a Machine: Can a New York City Law Enforce Algorithmic Fairness in Hiring Practices?, 28 Fordham J. Corp. & Fin. L. 185 (2023).