Keywords
Procurement; artificial intelligence; management-based regulation; performance-based regulation; regulation; documentation; government contracts
Abstract
Artificial intelligence (AI) is a black box technology in a black box industry. Some view AI as a life-changing technology capable of advancing society and perhaps even saving the world, while others fear its capacity to harm. Like Dr. Frankenstein, developers fear the unpredictability of their own creations; deployers fear the unknown risks of third-party AI tools that market pressures drive them to assume; members of civil society fear AI’s capacity to oppress the already oppressed and degrade trust in institutions; and everyday users fear the undisclosed potential of AI to cause harm by means they cannot readily comprehend. These stakeholders all harbor fears of the unknown, but not all fears are created equal. Some benefit from an informational advantage: AI’s commercial purveyors better understand the risks of their own creation. Without a market incentive to compel transparency, let alone caution, the market reinforces an AI information imbalance that allows better-informed developers to exploit their superior knowledge of the technology at the expense of less-informed deployers and users.
Economic theory has a term for harm that emanates from the unknown in markets—information asymmetry—and considers such harm a market failure because it undermines consumers’ ability to protect themselves from risky products or services. Government intervention can remedy this market failure by neutralizing the risks themselves through performance-based or management-based regulations, but not without adequate information about the relevant market. Unfortunately, lawmakers and regulators face AI informational problems of their own. Performance-based regulation designed without a clear understanding of the underlying technology risks imposing unsubstantiated, even arbitrary, performance requirements on AI products. On the other hand, uninformed management-based regulation risks mandating untested, unsupported process-based practices as part of the AI development life cycle. Both would be ineffective at best and harmful at worst. While scholars are actively debating the merits of performance- versus management-based regulation for AI, they are ignoring the threshold problem: a lack of actionable information undercuts both approaches. This Note seeks to fill that gap. It argues that federal procurement offers a promising solution: contracting for the missing information required to promulgate truly effective AI policies under either approach.
Rather than endorsing one approach over the other, this Note argues that federal procurement requirements should focus on information acquisition to rectify the prevailing information asymmetry between AI developers and the federal government so that the government can craft effective regulations that prevent AI developers from exploiting information asymmetries with the public. The procurement process is particularly well suited for AI information acquisition for three main reasons: (1) the contracting process can establish adaptive, information-forcing documentation requirements; (2) the federal government’s expansive investment in AI technologies enables the development of a robust repository of use-case-specific information and feedback; and (3) procurement’s competitive, public bid process fosters a symbiotic marketplace for documentation requirements, where the government must internalize higher costs for contractors’ compliance and, conversely, contractors must minimize those added compliance costs to tender competitive bids. Lawmakers require greater AI market insight to craft effective regulation; why not leverage the power of the purse to procure it?
Recommended Citation
Sam Adler, AI Procurement as Regulatory Reconnaissance, 94 Fordham L. Rev. 967 (2025).
Available at: https://ir.lawnet.fordham.edu/flr/vol94/iss3/3