Keywords
A.I.
Abstract
A number of articles have been written in the last couple of years about the evidentiary challenges posed by “deepfakes”: inauthentic video and audio generated by artificial intelligence (AI) in such a way as to appear genuine. You are probably aware of some of the widely distributed examples, such as: (1) Pope Francis wearing a Balenciaga jacket; (2) Jordan Peele’s video showing President Barack Obama saying things that he never said; (3) Nancy Pelosi speaking while appearing to be intoxicated; and (4) Robert De Niro’s de-aging in The Irishman.
The evidentiary risk posed by deepfakes is that a court might find a deepfake video to be authentic under the lenient standards of Rule 901 of the Federal Rules of Evidence, that a jury may then credit the video as genuine because deepfakes are so difficult to detect, and that all of this will lead to an inaccurate result at trial. The question for the Advisory Committee on Evidence Rules (the “Committee”) is whether Rule 901 in its current form is sufficient to guard against the risk of admitting deepfakes (with the understanding that no rule can guarantee perfection) or whether the rules should be amended to provide additional, more stringent authenticity standards applicable to deepfakes.
Recommended Citation
Daniel J. Capra, Deepfakes Reach the Advisory Committee on Evidence Rules, 92 Fordham L. Rev. 2491 (2024).
Available at: https://ir.lawnet.fordham.edu/flr/vol92/iss6/7