AI and Legal Ethics: Protecting Law Firm Integrity

As artificial intelligence (AI) continues to develop, the legal profession is increasingly integrating the technology into everyday operations. AI has the potential to significantly enhance legal work, but law firms and individual lawyers using AI tools must ensure that their use aligns with established ethical principles. The biggest concerns revolve around competence and confidentiality, both of which AI use complicates.

Below, we’ll walk through the legal ethics principles most closely associated with legal AI and explain the ways that AI tools are being used—and how they raise ethical questions. We’ll also touch on the regulatory landscape and what legal teams can do to keep their AI use ethical.

Key Takeaways:

  • AI is reshaping how law firms operate, introducing both efficiencies and new ethical complexities.
  • The most critical ethics concerns center on competence (Rule 1.1) and confidentiality (Rule 1.6) under the ABA Model Rules of Professional Conduct.
  • Firms must ensure transparency, client consent, and ongoing AI ethics training across their teams.
  • The ABA’s 2024 guidance emphasizes careful oversight and open communication about AI use.
  • Balancing innovation with professional responsibility is essential to maintaining client trust and firm integrity.

Key Ethical Principles in the Practice of Law

The ethical principles all lawyers must abide by are laid out in the American Bar Association’s Model Rules of Professional Conduct (MRPC). While many firms have additional principles and values they follow, the MRPC serves as a north star for legal ethics across all contexts.

Two Model Rules are especially important to consider with respect to AI and legal ethics.1

  • Rule 1.1: Competence – This rule establishes a duty of competence: lawyers must bring the “knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” AI complicates what competence looks like when it influences case strategy, though an argument can be made that proficiency with AI tools is itself becoming part of competence.
  • Rule 1.6: Confidentiality of Information – Similarly, this rule establishes a duty of confidentiality: lawyers must not reveal information related to client representation unless certain circumstances apply. Maintaining that confidentiality while using AI is an ongoing challenge, and it remains unclear whether exposing a client’s information to AI tools could constitute an unauthorized breach.

To avoid conflicts of interest (detailed in Rules 1.7, 1.8, 1.10, and 1.11), attorneys should always be as transparent as possible about what AI tools they’re using, how, and for what reasons.

Understanding AI’s limitations and training staff to use AI tools proficiently are vital to sustaining ethical legal practice. Legal professionals must understand how AI tools work in order to leverage their benefits while avoiding risks such as bias and data misinterpretation. Ongoing education programs help lawyers navigate the convergence of technology and law.

Generative AI is transforming traditional legal workflows. As in many other industries, AI is helping attorneys and law firms work much more efficiently. Automation and generative capabilities let legal teams complete rote tasks like data extraction far more quickly, and increasingly handle complex data tasks as well, freeing up resources for mission-critical strategic work.

Our 2024 litigation support trends survey found that a majority of firms have a baseline familiarity with legal AI, and about a quarter were already actively using it. About half also sought out AI familiarity and utilization when vetting potential litigation support partners.

In terms of specific use cases, AI is supercharging court reporting and other areas where pattern recognition and automation are especially useful. Contract review and legal research can be completed faster, and with greater accuracy and depth, using legal AI tools.

Generative AI can provide insights that refine legal practice by streamlining tasks like document drafting and case analysis. Nonetheless, attorneys must ensure these tools do not supplant the critical skills of legal judgment and analytical reasoning. By integrating AI judiciously, law firms can achieve greater efficiency without sacrificing ethical integrity or client confidence.

Ethical Challenges of AI in Law

Given the principles above and the ways firms are using AI, several specific challenges arise at the intersection of AI and legal ethics. First, there are concerns about bias and fairness in AI and machine learning (ML). As the New York State Bar Association explains, biases can be “imprinted” in AI tools, intentionally or unintentionally, through the datasets they are trained on, and can go unnoticed in their outputs.2

There are also concerns about the “black box” nature of many AI technologies, as users and impacted parties may not be fully aware of what these systems are doing, how, or why. This uncertainty is also tied to concerns about data privacy and security, as firms must ensure that protected information is not inadvertently exposed during training and other AI operations.

Overcoming these challenges requires human oversight—attorney judgment—alongside AI implementation. Legal teams must vet and monitor AI tools, ensuring that humans with lived experience practicing law have the final say on any AI-enhanced output. Proper supervision of AI activity is critical to maintaining ethical integrity.

Regulatory Guidance and Emerging Standards

Given how new the AI boom is, there are relatively few formal regulations governing how to use it lawfully and ethically. However, the ABA published its first formal ethics opinion on generative AI (gen AI / GAI) in July 2024. In it, the authors discuss the ways that AI can complicate MRPC 1.1 and 1.6, as noted above, along with the rules on communications (1.4) and fees (1.5).3

The biggest takeaway from the ABA’s guidance is transparency. There are some granular considerations about billing—firms can bill for time spent checking an AI output’s accuracy, but not for learning how to use the AI tool in question—but these are less critical than the duty attorneys have to be open and honest about their AI usage, always obtaining consent.

Looking ahead, more detailed regulations imposing additional restrictions are likely to emerge.

Practical Steps Law Firms Can Take

Strategizing for AI and legal ethics requires commitment to human oversight and careful, intentional interactions with AI technology. Law firms need sound governance in place that leads by example and makes it clear how and why professionals should be using AI tools.

To that effect, three steps any legal team can take to ensure ethical AI use are:

  • Vetting AI vendors for ethical compliance and transparency
  • Implementing clear usage policies for AI and enforcing them
  • Training staff on AI ethics and assessing their understanding

On all of these fronts, partnering with the right solutions provider is critical.

The best litigation support services partners will work with your team to select the right AI tools for you, disseminate clear guidelines to staff, and ensure their understanding and buy-in.

AI is here to stay in the legal profession. Firms are discovering new use cases for it every day, and the broader public’s use of similar technologies will likely lead clients to expect their attorneys to engage with this technology in some way or another. However, there are some risks and ethical concerns to using AI that need to be dealt with seriously and thoroughly.

Law firms must strike a balance between innovation and responsibility, leveraging their human expertise as a safeguard against potential misuses of AI. U.S. Legal Support understands how AI can benefit law firms. We’ve integrated advanced technology into our services to help firms leverage AI effectively—enhancing efficiency, accuracy, and outcomes for their clients.

To learn more about our AI-enhanced litigation support services, get in touch today.

Sources: 

  1. ABA. Model Rules of Professional Conduct. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/model_rules_of_professional_conduct_table_of_contents/
  2. NYSBA. Bias and Fairness in Artificial Intelligence. https://nysba.org/bias-and-fairness-in-artificial-intelligence/
  3. ABA. Formal Opinion 512: Generative Artificial Intelligence Tools. https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf
Julie Feller
Julie Feller is the Vice President of Marketing at U.S. Legal Support, where she leads innovative marketing initiatives. With a proven track record in the legal industry, Julie previously served at Abacus Data Systems (now Caret Legal), where she played a pivotal role in providing cutting-edge technology platforms and services to legal professionals nationwide.

Editorial Policy

Content published on the U.S. Legal Support blog is reviewed by professionals in the legal and litigation support services field to help ensure accurate information. The information provided in this blog is for informational purposes only and should not be construed as legal advice for attorneys or clients.