Legally Bionic: AI in the Legal Realm

“Does this suit feel like an ‘acquit me’? Would someone in this tie really have gotten a DUI? Am I going to jail, or does that juror just have RBF?”

In the future, the answers to these questions may not matter. With the dramatic rise of artificial intelligence, jobs that were previously thought to be impervious to the technological revolution, such as those of lawyers, judges, and policymakers, are being called in for cross-examination.

Law is uniquely vulnerable to changes brought about by AI, primarily due to the sheer volume of verbiage involved. According to a recent New York Times piece, a study by researchers at Princeton University, the University of Pennsylvania, and New York University found that legal services was among the industries most exposed to language-modeling AI. Another study cited by the Times, conducted by Goldman Sachs researchers, estimated that 44 percent of legal work has the potential to be automated by AI. These findings make sense; the tedium of technical jargon and legalese could likely be alleviated by machine learning models.

Naturally, several legal AI services have entered the market. Tools such as EviSort, LexisNexis, Casetext, and Harvey primarily target large firms or businesses aiming to save time on processing legal documents. EviSort, for instance, uses AI to help companies negotiate, approve, and comply with contractual obligations — essentially taking on the role of a highly efficient paralegal (legal assistant). In an interview with the HPR, Jerry Ting, founder and CEO of EviSort, described the software as “doing the stuff that lawyers don’t want to do. EviSort automates the manual, tedious part of the job that lawyers don’t want to do anyways.”

Yet the use of AI has been slow to reach the public legal sector. Whether this lag is due to affordability, applicability, or ethical concerns is unclear. Certainly, attorneys and judges alike must be cautious about AI’s accuracy. Recent events may even prompt rules requiring transparency about its use, given that predictable yet cringeworthy mistakes have already occurred. Massachusetts Lawyers Weekly reports that last June, two New York personal injury lawyers submitted a brief citing six nonexistent judicial opinions, complete with fabricated quotes, all generated by ChatGPT. The scandal drew widespread ridicule across the legal community, perhaps contributing to hesitation among attorneys to even consider using professional AI software.

Indeed, some public interest legal professionals emphasize elements of human judgment in the field that AI cannot replicate. In an interview with the HPR, the office of Suffolk County District Attorney Kevin Hayden acknowledged that some applications of AI may be of future use, such as “assisting with standard motions, briefs, and responses to public records requests,” but noted that “much of criminal prosecution law is so case-specific, such as assessment of evidence, credibility and availability of witnesses, review of friendly or adverse testimony and assessment of new case information.” The capability to make social determinations, in other words, may be distinctly human.

U.S. District Court Magistrate Judge M. Page Kelley seems to agree. In an interview with the HPR, Kelley explained, “There are so many factors particular to that case, and it seems you need a human being to sort through them.” AI cannot gauge a witness’s tone or sincerity at the level of an attentive attorney, at least not currently. People subconsciously pick up on movements and details of demeanor that may be imperceptible to a computer. Even if AI could evaluate these elements of a witness’s presentation individually, it could have trouble interpreting them in combination with the witness’s actual testimony at a level comparable to a human.

In particular, AI may not be ready to replace trial attorneys. Zachary Cloud, supervising attorney at the Massachusetts Committee for Public Counsel Services, explained to the HPR, “Courtroom skills, such as trying cases, arguing to a jury or judge — those skills are not likely to be replaced by AI any time soon, if ever.” Cloud added, “People want face time with their attorney. Sometimes clients need help with things that are collateral to the trial.” While AI may offer predictive value and efficiency in streamlined tasks such as data sorting, it may not be able to help clients with personal needs as a public defender would, such as finding a job or accessing treatment for substance abuse. As AI capacity ramps up, it may eventually address more meaningful and complex societal issues beyond language processing and data analysis.

However, the potential of AI to increase access to justice cannot be overstated. By increasing the efficiency and lowering the cost of legal services, AI could be a saving grace for underserved communities that face systemic barriers to legal representation such as cost, knowledge, location, and language. Still, myriad qualms exist, such as the concern that AI will further reinforce economic disparities. As stated in the Yale Journal of Law & Technology, “Some fear an eventual system with expensive — but superior — human lawyers and inexpensive — but inferior — AI driven legal assistance. Others fear… that AI will be superior to human lawyers but will be expensive and available only to large law firms and their wealthy clients.” Even with these risks, AI’s potential to expand legal services to underrepresented groups remains substantial.

A less frequently contemplated concern in the access-to-justice discussion, though, lies in judging. If more people gain access to justice and can take cases to court, more judges are needed to keep the court system from becoming backlogged. In an interview with the HPR, Memme Onwudiwe, Executive Vice President of Legal and Business Intelligence at EviSort, advocates for AI judges: “For them to actually have justice, somebody in a timely manner needs to rule on that case. And you can’t exponentially increase one and not on the other side and hope for a good time.” If AI is to increase access for plaintiffs, it might also necessitate the use of AI in judging.

Yet public perception poses a significant obstacle to AI replacing judges. A certain distaste for AI already exists even in spheres more technical than the legal field; 43 percent of U.S. adults surveyed in 2023 viewed its impact on society unfavorably. The public may very well balk at the idea of putting people’s freedom in the hands of an artificial brain. When asked about the potential of AI in judging, Cloud responded, “I think there’s a world in which you could do it, but I don’t think that’s the world that people want to live in. Most people want a judge to have flexibility.” Regardless of whether AI can make individually tailored judgments, the public may not have faith in it to do so.

The nebulousness of this emotional resistance, however, obscures a more sinister question: will AI judges eliminate bias or reinforce it? One striking hypothetical is bail: deciding who should be released and who should remain in jail to prevent further offenses. A UChicago model found that switching to AI-predicted bail decisions could result in a 40 percent smaller prison population, with the same recidivism rate as decisions made by human judges. The caveat, however, is the potential for bias. The Brookings Institution explains that because racial minorities comprise the majority of the American prison population, the algorithm could recommend that a higher proportion of people of color be denied bail. Cloud explains, “When you’re training on data that already might be biased, it’s likely that the AI might perpetuate that bias. Human biases still creep into the system, and I’m not sure if there’s a way to eliminate them even with AI.”

Despite this frightening prospect, bias may be easier to identify and control when it comes from an external, emotionless source than when it lurks as internal implicit bias. Though we may struggle to confront human bias, our already critical impression of AI may allow legal professionals to identify AI bias more readily. Onwudiwe defends his case for AI judges, saying, “When the AI is biased, you can fix it. You can’t do that for humans. To open up the conversation about bias in judges, you need to start with something that will get them to open their mouths… about systems around judging human bias, which judges are unwilling to do right now.” Judges may not otherwise be willing to acknowledge the influence of bias on their decisions, but the provocative proposal of using AI in judging may lead to clearer standards and tests for bias.

Current software seems genuinely useful for processing legal documents and suggests a broader capacity for AI to change the legal field. In the future, AI may be able to assist underserved populations by making legal services more accessible and affordable. It could replace bias-driven judging decisions, or at least bring attention to human bias in the system. Both the public and legal professionals remain divided on the societal implications of AI, however. Parties may be understandably hesitant to place their futures in the hands of artificial intelligence, and in its current state, AI cannot displace attorneys, judges, or the critically and uniquely human work that they do. AI certainly has a way to go before it can replicate Hollywood-level courtroom drama (“My Robot Vinny,” who?). In the meantime, let’s continue to directly examine the latest developments in the case for AI.
