Pre-Programmed Professionalism: The Ethics of Artificial Intelligence in the Practice of Law
The Bencher | September/October 2023
By Wendy L. Patrick, Esquire, PhD
Artificial intelligence (AI) has become a household term and a hot topic. Cutting-edge and controversial, it delivers an increasing number of services across a variety of industries through efficient automation. One of the things that distinguishes AI from other types of automation is its ability to think and learn—which is both sensational and scary. But can AI practice law? Although currently the answer seems to be “no,” in some respects, that could change over time.
The preamble to the American Bar Association (ABA) Model Rules of Professional Conduct defines the responsibilities of a lawyer in paragraph [1]: “A lawyer, as a member of the legal profession, is a representative of clients, an officer of the legal system, and a public citizen having special responsibility for the quality of justice.”
Quality justice requires quality judgment, which requires human involvement. Although AI can enhance the speed and accuracy of tasks, such as legal research, it cannot replace judgment, professionalism, or chemistry with court, counsel, or colleagues. And in front of a jury, a silver tongue remains a unique, individualized feature of skilled advocacy. Yet even the most talented trial lawyers may improve the speed and efficiency of some of the more mundane aspects of legal work through automating services. But which ones and at what cost?
AI and Ethical Rules
The use of AI in the practice of law implicates several important ethics rules. Not surprisingly, the first one deals with the obligation to know how to use AI in the first place: the duty of competence.
ABA Rule 1.1, Competence, requires that a lawyer “provide competent representation to a client.” Competent representation is defined as requiring “the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation.” Yet competence goes further, requiring an ongoing awareness of changes and developments in the law. Rule 1.1 Comment [8] explains that to maintain the requisite knowledge and skill associated with the duty of competence, a lawyer should “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology,” among other ongoing legal education requirements.
AI may also facilitate compliance with Rule 1.3, Diligence, which states that a lawyer shall “act with reasonable diligence and promptness in representing a client.” AI can also help lawyers comply with the discussion in Rule 1.3 Comment [2], which states that “a lawyer’s workload must be controlled so that each matter can be handled competently.” Comment [3] addresses one of the most frequently encountered client complaints: procrastination—recognizing it as perhaps the most “widely resented” professional shortcoming. Comment [3] also notes the potential adverse effect on a client’s interests from the passage of time or a missed statute of limitations, and makes the very practical observation that even when a client’s case is not substantively harmed, “unreasonable delay can cause a client needless anxiety and undermine confidence in the lawyer’s trustworthiness.”
AI and Relationships With Clients
AI might not be as helpful with respect to the human decision-making requirement of ABA Rule 1.4. This rule involves judgment and discretion, requiring a lawyer to “promptly inform” clients of certain important decisions or circumstances, “reasonably consult” with clients about important aspects of the representation such as how to accomplish the objectives of representation, keep clients “reasonably informed” about case status, and “promptly comply” with reasonable requests for information.
The duty to reasonably consult with clients about how to accomplish the objectives of representation can include providing information about the use of AI. When it does, to satisfy the spirit of Rule 1.4, the lawyer has to know enough about the use of AI (relating back to Rule 1.1, Competence) to be able to explain it to a client.
One of the hallmarks of an attorney-client relationship is confidentiality. ABA Rule 1.6, Confidentiality of Information, paragraph (a), includes, among other circumstances, that a lawyer shall not reveal information “relating to the representation of a client” without a client’s informed consent, or when the disclosure is “impliedly authorized in order to carry out the representation.” Paragraph (c) states that a lawyer “shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”
When lawyers use AI in case preparation in a fashion that involves inputting client information into an AI tool such as ChatGPT to generate content—whether pleadings, arguments, or analysis—they may be sharing confidential information. And with whom? This ties back to the Rule 1.1 competence requirement; lawyers should know the answer before they engage.
Using AI to generate legal analysis might also implicate ABA Rule 5.5, Unauthorized Practice of Law. Although the rule itself focuses on guidelines related to multi-jurisdictional practice, consider whether the rationale should caution lawyers about the risks of using AI that is unfamiliar with jurisdiction-specific rules and practices, arguably in violation of the spirit of the rule.
ABA Rule 5.5 Comment [2] discusses the definition of the practice of law, noting that limiting the practice of law to lawyers “protects the public against rendition of legal services by unqualified persons.” Consider the extent to which AI would be defined as “unqualified.” And although Comment [2] does not prohibit a lawyer from “employing the services of paraprofessionals and delegating functions to them,” it also requires the lawyer to supervise any work that is delegated and retain responsibility for the output (referencing ABA Rule 5.3).
Consequences of AI Incompetence: Accuracy and Honesty
Especially when engaged in monotonous tasks, people make mistakes. This is especially true when a person is working long hours or is tired or ill. AI does not get tired, never needs to recharge by taking a mental health day, and needs no nutrition or rest breaks. It is not subject to age or hourly requirements and does not get distracted. AI works all of the time. But is there a point at which we sacrifice accuracy for speed?
Although AI efficiency depends on data, not disposition, and software design, not stamina, it is not error-free. So, if it is more like Wikipedia than Westlaw when researching complex questions, we have to fact-check our legal briefs very carefully anyway. If we don’t, we run the risk of violating the duty of competence and potentially having to correct false representations pursuant to our duty of candor in the courtroom.
ABA Rule 3.3, Candor Toward the Tribunal, states in subdivision (a) that a lawyer shall not knowingly “make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer”…or “offer evidence known to be false.”
Because “robot briefs” are not (yet) reliable enough to submit to a court without verifying the citations or crafting individualized legal analysis, it is possible that false facts or law might slip into the record. If that happens, Rule 3.3(a)(3) provides instructions: “If a lawyer, the lawyer’s client, or a witness called by the lawyer has offered material evidence and the lawyer comes to know of its falsity, the lawyer shall take reasonable remedial measures, including, if necessary, disclosure to the tribunal.”
AI Is Not a Mentor or Role Model
Managing partners and supervisors are governed by ABA Rule 5.1, Responsibilities of a Partner or Supervisory Lawyer, which requires those who hold managerial authority within a law firm to “make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that all lawyers in the firm conform to the Rules of Professional Conduct.” As a practical matter, complying with this rule is easier within a law firm relationship of teamwork and trust.
Supervisors offer guidance and support and are terrific resources with whom to discuss individual cases. AI cannot replicate this dynamic. It has no capacity to serve as a mentor, role model, or objective sounding board, and even if it knows your past case statistics, it cannot remember your prior cases the way your supervisors do. Because it has no personal life experience, it cannot even help you weigh the relevant human factors when selecting a jury.
Authentic Advocacy Requires a Higher Morality
AI cannot engage in unique, human, high-level, experientially based reasoning. While it can learn, artificial intelligence is not emotional intelligence in the sense that it cannot replicate feelings, sympathy, empathy, and other factors that drive human decision-making. This distinction is particularly relevant when picking a jury or preparing a client’s case, both of which require discernment and discretion. And although AI is capable of creativity, artificial ingenuity is not the same as human innovation, and AI’s “originality” often depends on programming preferences.
Able to work “forever,” AI does not face the challenge of mortality, but it is challenged by human morality. It can learn about ethics and professionalism through proactive programming, but it will never view human moral duty in the same way people do. Possessing intelligence does not confer human judgment or the ability to distinguish right from wrong—especially in situations requiring the benefit of personal experience.
Within the practice of law, the goal appears to be deciding what type of AI to use to accomplish particular tasks, how to create the right types of legal research queries, and how to program AI with the correct set of data. Like any other computerized task, output depends on input. Consequently, important legal decisions should be made by real lawyers, not artificial assistants, to ensure accurate, efficient, and ethical decision-making in the pursuit of justice for all.
Wendy L. Patrick, Esquire, PhD, is a career trial attorney, former chair of the California State Bar Ethics Committee, and former chair of the San Diego County Bar Association Ethics Committee. She is also an author, trial consultant, and expert witness. The opinions in this piece are her own and not attributable to her employer.