
AI is Changing the Practice of Law

Written by Aaron M. McHenry, Attorney at Law

Is your lawyer taking the time and effort to make a successful case for you? Artificial intelligence is changing the practice of law, sometimes at the client’s peril.

The other day, I was reading an article about an injury case against Walmart in Wyoming where the plaintiff (injured party) was represented by America’s largest personal injury law firm.

The plaintiff’s lawyers used ChatGPT to research case law to support their client’s case. Unfortunately for the client, the eight cases the lawyers presented in court as case law proved to be “hallucinations” of the chatbot and were completely made up.

When the lawyers were called out in court on the factual falsehoods, they had no choice but to throw themselves at the mercy of the presiding judge. To the firm’s credit, they did own up to their mistake. While it may or may not have had a negative impact on the outcome of the case, it is safe to assume it was not a good look for the client or their lawyers.

As of this writing, the judge has not decided on any disciplinary actions against the lawyers.

This was not a one-off incident. There have been several documented cases of lawyers presenting bogus A.I. hallucinations in court, and undoubtedly more are coming. It’s not as though lawyers were infallible before A.I., but it does point to major pitfalls for the clients of lazy lawyers — clients who are unlikely to do their own case research, nor should they have to.

A recent survey by Thomson Reuters showed that 62% of lawyers use A.I. in their case preparation, and 12% use it regularly. There is no doubt that its usage will grow quickly in every aspect of law practice.

Using artificial intelligence can be an asset in preparing law cases, but it should be mandatory that any lawyer using it be trained in how to use it. Ambiguous prompts, lack of context, vague instructions, unnecessarily long prompts, and Large Language Models (LLMs) that don’t have access to outside databases are a few examples of things that can cause hallucinations.

[Image: Artificial Intelligence paper — photo by Markus Winkler]

There are a variety of ways to get reliable data from A.I., but if you don’t know enough about how to use A.I. to get reliable results, and a client’s wellbeing hangs in the balance, you probably shouldn’t be using it. Or, at the very least, do your homework and verify the results.

Accuracy is paramount in the practice of law, and my practice of injury law is no exception. Being a lawyer requires doing the hard work for your clients to afford them the best possibility of success. Research and fact-checking are critical components of our work, and from what I’ve seen, taking shortcuts usually comes back to bite you.

When I was in law school, there wasn’t even an internet, so sifting through books and using LexisNexis when I could get access to it were the only ways to build my cases. I adapted slowly to technological change, but my learning and use of technology were methodical and persistent — the same way I treat each case I take. Still, sometimes going back to the basics, even if the work is long and arduous, pays off better for our clients!