Lawyers often misuse what they call AI: in practice, mostly large language models generating unreliable content.
What Lawyers Call AI
What lawyers refer to as artificial intelligence in their work is typically a large language model of varying quality, trained on vast and sometimes controversial datasets. These models are engines of probability, statistics, and reinforcement learning that prioritize plausible-sounding output over accuracy.
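The "probability over quality" point can be illustrated with a toy sketch. This is not a real model; the candidate continuations and their probabilities are invented purely for demonstration, to show that sampling by likelihood rewards fluent-sounding text, not factual accuracy.

```python
import random

# Invented toy distribution: a model picks the next continuation by
# statistical likelihood, with no notion of whether a source is real.
next_continuation_probs = {
    "as held in the judgment of 12 May 2021": 0.6,  # fluent, may not exist
    "no matching ruling could be found": 0.4,        # accurate, less "typical"
}

def pick_continuation(probs):
    """Sample one continuation weighted by its probability."""
    options, weights = zip(*probs.items())
    return random.choices(options, weights=weights, k=1)[0]

# The plausible-sounding continuation usually wins, whether or not
# it corresponds to anything real.
print(pick_continuation(next_continuation_probs))
```

Real systems are vastly more sophisticated, but the underlying incentive is the same: the output is whatever scores as most probable, which is why confident-sounding fabrications appear at all.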
The Reality of ‘Real AI’
Unfortunately, this is the sad truth: "real AI" in the traditional sense remains rare in legal work and is seldom necessary for it. That may change, though, as the profession likely faces a transformation.
Unreliable Outputs
As the description of language models above suggests, a significant share (let's say 15%) of what we get from various chatbots is simply garbage: unfit for professional use and often outright false.
Growing Concerns
Increasingly, when speaking with non-lawyers, I hear that ChatGPT has cited judgments that do not exist. The scale of this phenomenon is growing, yet lawyers continue to rely on tools that produce these errors and falsehoods.