AI Risks and Ethics in the Law

The Bencher—July/August 2024

By Raymond T. “Tom” Elligett Jr., Esquire

Dr. Dave Bowman: “Open the pod bay doors, HAL.”
HAL 9000: “I’m sorry, Dave. I’m afraid I can’t do that.”
Bowman: “What’s the problem?”
HAL: “I think you know what the problem is just as well as I do.”
Bowman: “What are you talking about, HAL?”
HAL: “This mission is too important for me to allow you to jeopardize it.”

This 1968 exchange from 2001: A Space Odyssey epitomizes the fear and distrust many have of artificial intelligence (AI). While using AI in the practice of law will not leave anyone languishing in outer space, lawyers need to appreciate AI’s potential risks and ethical issues.

First, a timing caveat: as this article is submitted in March 2024, AI has been a hot legal topic for a year. In spring 2023, counsel submitted an “affirmation in opposition” in a New York federal case that contained fictitious case cites generated by AI. The court in Mata v. Avianca, Inc. sanctioned the lawyers in a June 2023 opinion. By the time you read this, there undoubtedly will be more developments in this evolving area of legal practice.

AI is not new. People have been using forms of AI for years. When a word processing program corrects spelling or suggests different wording or grammar, that’s AI. The same is true when devices suggest how to complete a text or email. Even using a calculator instead of multiplying or adding “by hand” reflects the same trend of automating tasks once done by people.

AI is not new in law either. Those of a certain age can remember researching “by hand” using digests and “Shepardizing” with the red books. Online research is AI-driven, and no one questions using it today for a faster and more thorough result.

Generative AI

What is new is generative AI, or GAI. Florida Bar Ethics Opinion 24-1 quoted a definition of generative AI as “deep-learning models” that compile data “to generate statistically probable outputs when prompted.” The opinion observed “the datasets utilized by generative AI large language models can include billions of parameters, making it virtually impossible to determine how a program came to a specific result.”

While recognizing that GAI may be a useful tool and provide time (and thus cost) savings, the opinion warned GAI can “hallucinate” or create “inaccurate answers that sound convincing.” This happened in the infamous New York case and continues to happen.
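For readers curious what “statistically probable outputs when prompted” means in practice, the short Python sketch below is a deliberately toy illustration, not anything from the opinion: the word-probability table and the generate function are invented, and a real model learns billions of parameters rather than a handful of hand-typed numbers. The core mechanic is the same, though: each next word is sampled for plausibility, not checked against any authoritative source.

    import random

    # Toy "model": for each current word, invented probabilities of the next word.
    NEXT_WORD_PROBS = {
        "the": {"court": 0.5, "case": 0.3, "lawyer": 0.2},
        "court": {"held": 0.6, "found": 0.4},
        "case": {"cites": 0.5, "law": 0.5},
    }

    def generate(prompt, length=5):
        """Extend the prompt by sampling each next word from the toy table."""
        words = prompt.split()
        for _ in range(length):
            choices = NEXT_WORD_PROBS.get(words[-1])
            if not choices:
                break  # no learned continuation for this word
            # Choose the next word in proportion to its probability. The result
            # sounds fluent, but nothing verifies it against a real source.
            words.append(random.choices(list(choices), weights=list(choices.values()))[0])
        return " ".join(words)

    print(generate("the"))  # e.g., "the court held" (plausible, but unverified)

Viewed this way, hallucination is not a malfunction but a predictable byproduct: a system built to maximize plausibility can produce a citation that looks right yet does not exist.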

Chief Judge Daniel Sleet of Florida’s Second District Court of Appeal recounted at a February 2024 seminar that the court’s staff attorneys found fictional cases in a brief submitted to the court. The court issued a show cause order. The brief had been prepared by an associate. Sleet said the associate’s managing attorney took responsibility, as he should have, corrected the brief, and terminated the associate. When asked whether any of the court’s staff attorneys might have used GAI to draft memoranda, Sleet responded, “Our court has not authorized the use of AI by any staff attorneys. We have no plans to implement the use of AI anytime soon.”

In contrast to this Florida matter, the consequences were far worse for lawyers who misused GAI and then attempted to deny responsibility or cover it up, as in the New York case and the Colorado case of People v. Crabill.

The Florida Bar AI Committee

Florida Bar President Scott Westheimer, Esquire, formed the Special Committee on AI Tools and Resources in summer 2023. The committee drafted Ethics Opinion 24-1, adopted early this year, and generated several proposed comment amendments to the bar rules, which the Florida Bar submitted to the Florida Supreme Court.

Co-chair Edward Duffy Myrtetus, Esquire, says the Florida committee started with two main areas: lawyer regulation and how GAI affects the courts and clerks. He says that GAI tools and resources evolve so rapidly that it is challenging to remain current.

Opinion 24-1 identified multiple ethical issues. Lawyers must protect the confidentiality of client information and need a client’s informed consent or an applicable exception to disclose that information. Counsel must be cognizant of the information they are “inputting” to generate an AI product, because some “self-learning” AI programs may store it.

Lawyers have oversight duties when relying on nonlawyer (and lawyer) assistants. Opinion 24-1 opines that these oversight duties extend to GAI. The fake-citation cases confirm the importance of such oversight.

The bar opinion discussed concerns about proper billing for costs and about not duplicating or inflating bills. Other concerns involved advertising, both the use of GAI in advertising and claims that a lawyer’s AI is better than others’.

Myrtetus says the committee’s next topics include a practical focus on best practices for lawyers and on evidentiary matters (such as hallucinations, deepfakes, and efforts to validate and verify evidence). He predicts AI will reshape trial strategy, jury selection, and transactional negotiations.

Jonathan D. Grabb, Esquire, ethics counsel for the Florida Bar, notes that proposed comments to the Rules of Judicial Administration and the Rules of Professional Conduct were submitted to the Florida Supreme Court in late 2023 and early 2024. The proposed comment requiring lawyers to understand the risks and benefits of technology would be broadened to include GAI.

Law Schools

One can be certain that the more tech-savvy law students were aware of AI by 2023 at the latest. GAI in law school poses the same risk of citing fictitious cases. Beyond that, using GAI to create a written submission could raise issues under school honor codes that require students’ work to be their own. Professors may believe students should learn how to analyze problems and create their own written work before adopting GAI “shortcuts” that might improve the efficiency of producing the final product. Whether such shortcuts improve or detract from the quality of that product is another question.

Where We Stand

The Florida Bar opinion concluded: “In sum, a lawyer may ethically utilize generative AI technologies but only to the extent that the lawyer can reasonably guarantee compliance with the lawyer’s ethical obligations. These obligations include the duties of confidentiality, avoidance of frivolous claims and contentions, candor to the tribunal, truthfulness in statements to others, avoidance of clearly excessive fees and costs, and compliance with restrictions on advertising for legal services. Lawyers should be cognizant that generative AI is still in its infancy and that these ethical concerns should not be treated as an exhaustive list. Rather, lawyers should continue to develop competency in their use of new technologies and the risks and benefits inherent in those technologies.”

At a recent 2024 University of Florida College of Liberal Arts and Sciences alumni event, Dean David Richardson recalled the advice in The Graduate that the future was “plastics.” He observed today’s term for the future is “artificial intelligence.” What’s to come? Stay tuned, or to quote another celluloid AI character: “I’ll be back.”

Raymond T. “Tom” Elligett Jr., Esquire, is a shareholder in Buell Elligett Farrior Faircloth, P.A. in Tampa, Florida. He is a past president of the J. Clifford Cheatwood American Inn of Court.

© 2024 Raymond T. “Tom” Elligett Jr., Esquire. This article was originally published in the July/August 2024 issue of The Bencher, a bi-monthly publication of the American Inns of Court. This article, in full or in part, may not be copied, reprinted, distributed, or stored electronically in any form without the written consent of the American Inns of Court.