Keywords

artificial intelligence; ChatGPT; professional responsibility

Abstract

In this Essay, I explain that responsible and ethical use of AI in law practice requires reconceptualizing the lawyer’s professional relationship to technology. The current commercial-industrial relationship rests on a stylized model of technology as mechanical application, a model not calibrated to emergent AI-enabled technologies. Put differently, lawyers cannot interact with AI-enabled technologies the way they traditionally interact with, say, word processors. For AI-enabled technologies, I argue that a “division of labor” framework is more fruitful: like horizontal professional relationships between peers or vertical ones within professional hierarchies, lawyers ought to interact with sophisticated technologies through arrangements that optimize for the relative skills of each party. This reconceptualization is necessary for at least two related reasons. First, technologies that purport to perform sophisticated tasks (for example, analysis, judgment, and synthesis) will tend to have higher error rates because the information they process and the objectives they pursue are generally imprecise. Unlike mechanical applications, for which error is tantamount to failure, errors in such higher-order tasks are not necessarily disqualifying. As a result, safe use of these tools requires a template that both accommodates and mitigates error. Second, AI technologies pose an asymmetrical risk: the peculiar mix of obligations, rights, and public-interest considerations in law practice means that failure carries high costs. As the “fake citations case” demonstrates, misusing AI-enabled tools can generate substantial legal-ethical harms.
