
At our firm, customer service, transparency, and results provide the foundation for everything we do. These are values an AI model cannot capture. In this post, we provide a broad overview of why these models are a poor fit for our sector and why our team does not believe in integrating them into our current operating procedures.
- Artificial Intelligence (AI): Computers that learn from data and perform tasks that use human-like thinking, such as recognizing images, chatting, or making decisions.
- Large Language Model (LLM): A type of AI trained on large amounts of text that predicts the next words and can write or hold a conversation.
- Probabilistic: Based on chances and likelihoods rather than certainty.
- Stochastic: Involving randomness, so results can vary each time.
- Token: A small piece of text, such as a word or part of a word, used by computers to process language.
- Hallucination (in AI): When an AI says something that sounds correct but is not true.
Trouble Is in the Architecture: The Tech Behind LLM’s Shortcomings
Large Language Models (LLMs) have become synonymous with “artificial intelligence,” or “AI” for short. These models are probabilistic systems that generate contextually relevant and variable responses by identifying and applying patterns learned during training on large datasets. They are stochastic models, meaning their outputs are generated by sampling from probability distributions over possible next tokens, which introduces randomness and allows for varied yet contextually relevant responses rather than fixed, deterministic answers.
In Even Simpler Terms, What Does This Mean?
Large Language Models are computer programs that learn from a lot of text and then use that knowledge to guess what words should come next. Because they don’t always choose the exact same words, their answers can sound natural and a bit different with each response.
As you can see, the issue for clients and attorneys becomes apparent: “Then use that knowledge to guess what words should come next…” When you work as an attorney, there should be no guesswork in the motions, briefs, and filings you create for your clients or the courts. Law is supported by facts, evidence, and precedent, not probability.
Because of this variability, LLMs are prone to a phenomenon known as “hallucinations”: producing answers that sound correct but are wholly fabricated.
This variability is inherent to the architecture, meaning the system sometimes makes things up because it’s guessing what words should come next, not checking if they’re true.
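The guessing described above can be made concrete with a small sketch. Assuming a tiny hypothetical vocabulary and made-up probabilities (real models score tens of thousands of tokens at every step), this Python snippet draws the next word by chance rather than by checking facts:

```python
import random

# Toy next-token distribution for the prompt "The court ruled in ..."
# (hypothetical numbers for illustration only; real models assign
# probabilities to tens of thousands of tokens at every step)
next_token_probs = {
    "favor": 0.55,
    "error": 0.25,
    "2019": 0.15,
    "banana": 0.05,
}

def sample_next_token(probs, rng):
    """Stochastic decoding: draw one token according to its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()  # unseeded, so different runs can pick different tokens
samples = [sample_next_token(next_token_probs, rng) for _ in range(5)]
print(samples)

# Nothing in this loop checks whether a token is *true*: a fluent but
# wrong continuation can still be drawn, which is how hallucinations arise.
```

Run it a few times and the output changes, because the model is sampling from likelihoods, not verifying facts.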
Why It Is a Bad Idea To Use LLMs in Legal Practice Areas
Large Language Models Can Be Dangerous To Use in Law for Multiple Reasons:
- Hallucinations within documents
- These models lack accountability
- Contextual blindness of unique client situations
- Privacy concerns with client information
- Professional and ethical standards
- Bias and fairness within the outputs
Hallucinations within Legal Documents
A number of attorneys have unfortunately been sanctioned for using LLMs to compile and file motions. Because of hallucinations, LLMs have a tendency to fabricate statutes, cases, and facts. This means filings can be supported by sources that do not actually exist, making them unreliable and potentially damaging to a client’s case.
LLMs Lack Accountability
Large Language Models bear no responsibility for errors in documents and face no consequences for incorrect information. They also lack the ability to verify the truth. And while the models are not held accountable, you will be held accountable for their mistakes.
Contextual Blindness: We’re Hands-On, LLMs Are Not
Conversations and rapport are the foundation for a winning case. At Michael Armstrong Law, and at most reputable law firms, attorneys will partner with you to hear your story, help you compile the facts, and collaborate towards a testimony that gets you the benefits you deserve. At this time, there is no method for LLMs/AI/Chatbots to work directly with you, in real time, at a desk, with your paperwork and evidence laid out.
Privacy Concerns with Sharing Client Information with LLMs
When you work with LLMs, there is a risk of data exposure. Many models also come with fine print and complex legalese that does not clearly define how your data is handled. A reputable attorney knows it is inappropriate to share client details with an AI chatbot of any kind.
Lawyers Are Required To Follow Ethical Standards
Attorneys must provide accurate, well-researched, and reliable legal work. Also, legal professionals must be honest with courts and clients. Using fabricated cases or statutes from LLMs directly undermines this ethical duty.
LLMs Are Inherently Biased
Large Language Models reflect biases due to their training data. In legal settings, biased outputs could unfairly influence arguments, reinforce discrimination, or misrepresent legal precedent.
The Place for Large Language Models in the Legal System
LLMs should not be relied on for substantive legal analysis, drafting, or citing authority, but they can have safe and ethical uses in the legal system when applied carefully and used with human oversight:
- The newly implemented Hearing Recording and Transcriptions (HeaRT) System is a good example of AI and law working together.
- Helping to disseminate publicly available and widely known legal knowledge to individuals, much as Google does as a search engine. However, you should use this only for general research purposes.
- Supporting continuing education and, with appropriate use, helping attorneys find newly released legal information and court cases. Even then, the attorney must meticulously check all facts, documents, and news sources.
Michael Armstrong Law Understands LLMs Enough To Leave Them Out of Our Processes
Our firm is locally owned and operated. The purpose of our compassionate and diligent team is to work directly with you. Our close partnership with you helps to dictate the outcome of your Social Security disability appeal.
While AI has taken the world by storm, we fail to see its direct application within Social Security disability law. For the foreseeable future, Michael Armstrong Law plans to continue its three-decade-long service to the community with operating procedures that keep only people in the loop.