
A.I. Enthusiasm May Outpace Current Application, But Not For Long

Articles & Publications

March 2024
By: Joseph Leonard
DSBA: The Bar Journal

The topic of Artificial Intelligence has dominated both the popular and professional landscapes over the last year, exemplified by the release of the generative A.I. program ChatGPT, which quickly amassed well over 100 million users.

Although A.I. in some form has seen application across various fields, including the legal industry, for decades, the idea of practically applying artificial intelligence models truly hit the zeitgeist in 2023. While A.I. offers a new horizon of possibility for breakthroughs in legal work, enthusiasm should be tempered, and education, particularly awareness of potential pitfalls, should be prioritized.

One way to distinguish A.I. programs is by categorizing them as "predictive" or "generative." While both rely on machine learning algorithms, predictive A.I. is used to classify discrete datasets with the ultimate goal of identifying likely future trends in those datasets. These processes are widely used in eDiscovery, with firmly established workflows developed over more than a decade. Generative A.I. creates new content; it is built on large language models, or LLMs, trained on the immense volume of data available on the internet. Generative A.I. is still predictive in nature, in that it is "predicting" the next word in a sentence, or the shape of the next line in a drawing, but the end result is an article, letter, or image, seemingly created out of thin air upon request. This spontaneous creation calls to mind a magic genie granting wishes, and it has clearly captured the public's imagination, fueling speculation about endless potential and easy implementation: a bevy of content springing forth at the click of a button. In fact, an online cottage industry has sprung up over the last year promising to teach users the most effective way to ask their wish, i.e., to formulate the prompt they input into the model.

While the possibilities may seem endless, there are certainly limitations on how A.I. can operate, and guardrails must be established around what constitutes A.I. input and how the public relies on A.I. output. Two well-established concerns with A.I. generally involve implicit bias and privacy.

Because A.I. relies on existing content to formulate new content, that output will be impacted by bias and prejudice in the existing content, reinforcing the baseline status quo. This phenomenon was highlighted when Amazon was forced to abandon a recruiting tool where A.I. prioritized the resumes that most resembled those of employees currently holding similar positions.[1] The net impact was that any societal pressures or prejudices impacting the current makeup of the workforce were further entrenched by the A.I. output.

Privacy concerns stem from the fact that the massive amount of training data for these large language models is generally scraped from the internet, which includes both personal data and copyrighted material. Copyright law will have to wrestle with the implications of this in the years to come.[2]

Historically, the legal profession has been slow to embrace new technologies. A Bloomberg survey of attorneys in 2023,[3] however, showed that despite this stereotype, more and more attorneys were adopting generative A.I.

While most users primarily reported using generative A.I. outside of work rather than on the job,[4] the number of respondents who reported using it in the context of their legal work increased from the beginning of 2023 to the end of that year. Those who used it at work reported primarily using it to draft correspondence and legal documents. eDiscovery service provider Lighthouse reports similar trends in its February report, "State of AI in eDiscovery."[5]

The report illustrates the legal industry's high level of interest and appetite for the adoption and integration of A.I. systems into their workflows, while also highlighting the need for development of company policies and education around the technology. A.I. represents a great opportunity for the field, but the associated risks must also be clearly understood.

Indeed, the field of eDiscovery can operate as a beacon to illuminate potential drawbacks and provide solutions, as eDiscovery has employed predictive A.I. coding and technology assisted review for decades, with workflows so well established at this point that courts, including the Delaware Court of Chancery, encourage the use of such programs.[6] The benefits of utilization are clear and significant, and the established protocols for confirmation of output are the result of years of analysis and thought leadership.

Predictive coding and assisted-learning algorithms allow a document review platform to analyze data and promote likely responsive material for review, and potential production. At scale, this results in an enormous return on investment, with accuracy higher than human reviewers and with likely non-responsive material pushed to the end of the review, where it may ultimately not require review at all. However, to implement a workflow that sets aside such a large portion of potentially discoverable documents, both sides must have a clear understanding of the process.

Elusion testing is effectively a confirmation sample of the likely non-responsive material and is one example of a critical guardrail on the use of such programs. An elusion test is a means of validation that ensures the project output falls within expected parameters, and both the method and the results are discussed with opposing counsel to ensure everyone is working from a place of full information. Human audits of results and understanding of processes are paramount to the successful implementation of A.I. Without those checks, it is nearly impossible to argue that the process is defensible, simply because no one would be watching over the process.
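The mechanics of an elusion test can be sketched in a few lines of code. The following is an illustrative simplification, not any vendor's actual workflow: the document identifiers and the `review_fn` stand-in for the human reviewer are hypothetical, and real protocols pair the point estimate with a statistical confidence interval negotiated with opposing counsel.

```python
import random

def elusion_test(null_set, review_fn, sample_size, seed=42):
    """Draw a random sample from the predicted non-responsive ("null")
    set and have a human reviewer audit it.

    null_set:    document identifiers the model has set aside
    review_fn:   stand-in for the human reviewer; returns True if a
                 sampled document is actually responsive
    sample_size: number of documents pulled for the audit
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = rng.sample(null_set, sample_size)
    responsive = [doc for doc in sample if review_fn(doc)]
    # The elusion rate estimates the fraction of responsive documents
    # that "eluded" the model and remain in the null set.
    return len(responsive) / sample_size
```

A low elusion rate supports the defensibility of leaving the remainder of the null set unreviewed; a high rate signals that the model needs further training before the review concludes.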

With generative A.I., the situation is similar. Generally, the algorithms are closed. The training data is obscured. It's extremely difficult to explain the process to a layman, and many experts will admit that they cannot explain exactly what is happening within the LLM. One asks a question of the model and gets a seemingly reasonable answer. The result has the appearance of validity and success, and so may too quickly be deemed as such. In fact, these models return varying responses to the same prompt, meaning that even if the results are generally similar, the replicability of the process is called into question.

Developments in A.I. implementation over the last year have been rapid and exciting, but there is still a long way to go before full-scale adoption in legal practice is reasonable. Lighthouse's report notes a high level of enthusiasm for A.I. implementation, with 87 percent of respondents being "very interested" or "interested" in the tools. Therefore, the development of company and firm policies around such tools, and attorney education on these issues, are paramount. Attorneys with experience validating A.I. output should share their expertise with colleagues who are not familiar with the subject. In turn, those who do not have experience with these tools should reach out to the Legal Technology community to benefit from the knowledge it has gained in practice.


[1] "Insight - Amazon Scraps Secret AI Recruiting Tool That Showed Bias." Reuters, October 2018.
[2] "Generative Artificial Intelligence and Copyright Law." Congressional Research Service, LSB10922. September 29, 2023.
[3] "2023 State of Practice: Practice in the New Era." Bloomberg Law.
[4] Such hesitance is understandable, given the high-profile rake-stepping that has occurred, and continues to occur, as attorneys rely on ChatGPT output without proper oversight or understanding. See, e.g., Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023); Park v. Kim, No. 22-2057, 2024 WL 332478 (2d Cir. Jan. 30, 2024).
[5] "State of AI in eDiscovery: 2024 Benchmark Report Reveals Opportunities and Risks." Lighthouse.
[6] See, e.g., Moore v. Publicis Groupe, 2012 WL 1446534; EORHB, Inc. v. HOA Holdings LLC, 2013 WL 1960621; OSI Restaurant Partners, LLC v. United Ohana, LLC, 2017 WL 396357.
