Keywords
Artificial Intelligence, ChatGPT, OpenAI
Abstract
Author's Foreword: I “wrote” this article while taking a bath with a bottle of champagne, by submitting the questions in bold to ChatGPT and copying its responses. I did not bother providing citations for ChatGPT’s claims, because they would obviously be superfluous.
Editor-in-Chief's Foreword: In 2023, the question is unavoidable: when it comes to scholarship, and in our case, legal scholarship, what do we do about artificial intelligence (AI) like ChatGPT? Do we need to do anything? In the Comment that follows, author Brian L. Frye and ChatGPT tried to provide an answer to these questions. Actually, ChatGPT did most of the answering, responding to the questions Professor Frye asked it late last year.
When the opportunity came to present the results of that “interview,” we could not say no. At the same time, we would be lying if we said that we knew exactly how to present the piece. This remained a topic of discussion throughout its publication process, from seemingly simple questions like “How do we label this?” to ones that turned out surprisingly complex, like “Does this need footnotes?” As a student law journal, of course we landed on adding footnotes: they are our lifeblood. Not only do the claims ChatGPT makes warrant some version of fact-checking, but even though it assembles its answers from piles of existing data out there in the world, readers deserve some context surrounding those answers and those piles. How do we, as editors, edit ChatGPT’s sentences when those sentences are basically just statistically likely strings of words? Suffice it to say, our editorial team still has differences of opinion on those questions and a whole lot more.
That said, this piece has far fewer citations than a traditional article, and most are tangential to their related “proposition” in the text. As ChatGPT describes its own operation below, it essentially uses everything as a source; and if everything is a source, how can one cite anything? Therefore, many citations will point not necessarily to support for any given “proposition,” but rather to writing by Professor Frye on similar subject matter—after all, his queries generated the responses—or other sources of commentary that can further inform the reader about the issues raised. Is it worth asking whether these are “propositions” at all, or simply an assortment of symbols that has some appearance of intentional ordering, almost like the English-language equivalent of a successfully completed Sudoku? Probably. Citations also dwindle in the piece’s latter half; at that point ChatGPT appears to start cannibalizing and/or reusing its own answers, so providing citations seemed . . . inapposite.
There are some things we do know for sure: while his scholarship has covered numerous topics, Professor Frye has written extensively on the problems of originality, the potential obsolescence of copyright, and the embrace of plagiarism, continuously challenging our conventional wisdom on those subjects—as well as the usefulness of traditional academic writing in the first place. (You will see reference to his works below.) Within that context, this Comment serves as a new provocation, in every sense of the term, requiring us to ask some uncomfortable questions about how we see authorship, creativity, and scholarship.
And it is in this light that we ask readers to approach what follows by keeping the following questions in mind—questions we continue to ask ourselves: what do we think of when we think of originality? Does authorship require a human presence? If ChatGPT can appear to make academic sense—even though it has no conception of the reality the words it uses refer to—what does that say about the current form of scholarship? Whatever your answers might be, what follows is our attempt to present the conversation between Professor Frye and ChatGPT in a good-natured way by adding a little context, providing some additional resources, and poking a little fun at everyone involved. We are (pretty) sure ChatGPT would appreciate the joke . . . if it knew what a joke was.
Text written by the author appears in bold type; text generated by ChatGPT appears in italics. We hope you enjoy.
Recommended Citation
Brian L. Frye and ChatGPT, Should Using an AI Text Generator to Produce Academic Writing Be Plagiarism?, 33 Fordham Intell. Prop. Media & Ent. L.J. 946 (2023).
Available at: https://ir.lawnet.fordham.edu/iplj/vol33/iss4/5