
Mary, a Texas resident, was getting divorced from Bill. She did some internet research on how community property is handled in a divorce and was happy to learn that it would be divided equally. Imagine her surprise when the Judge awarded her only 40%, considerably less than an equal division.
Mary had been duped by AI.
What is Artificial Intelligence (AI)?
Artificial intelligence, commonly known as AI, is everywhere. It shows up in online searches. It creates summaries. It drafts papers.
Google has a version on its search engine called “Generative AI” which, by its own definition, is a “type of artificial intelligence that can create new content… based on patterns it learns from existing data…”
If you type a legal question into the Google search function, then Generative AI is all too ready to answer.
That is not a good thing.
Legal Questions Yield Questionable Answers
During my research for this column, when I typed in a search for “what is community property,” the answer popped up at the top of the page under the heading “AI Overview.” There, for all to see, was the statement “Community property is divided equally between spouses during a divorce or legal separation.”
Nice information, but it is wrong in Texas. Texas requires that the property be divided equitably, which does not necessarily mean a 50-50 split.
Lawyers get duped by AI, too. They have been caught citing fake cases and summarizing fake laws, all created by AI tools.
Lawyers vs. AI
In one case, a New York lawyer found himself handling a case in federal court. He did not have experience with the issues being raised and did not have subscriptions to the usual legal research tools, so he turned to ChatGPT, an AI chatbot, for his research.
The resulting responsive pleading, which the lawyer’s law firm colleague filed without review, cited phony cases with bogus quotes. When the Judge asked them to explain, the lawyers doubled down and filed an affidavit that included the ChatGPT summaries instead of any actual case decisions.
The Judge was not amused. He sanctioned the lawyers and their law firm.
Why is Artificial Intelligence Unreliable?
One of the reasons that AI answers are unreliable is what they use to “learn” how to respond. AI relies on “patterns from existing data.” If the existing data is wrong or incomplete, then the answer is going to be wrong or incomplete.
AI seems to draw its answers from sources that are readily available on the Internet. When I typed in a prompt, the responses showed a blue link symbol for their sources. Not all of the sources it showed are considered authoritative by lawyers and judges; many seemed to be just marketing materials.
The Specificity of the Search Determines the Answer
Another problem: the AI answer depends on the prompt typed into the search bar. The more specific the prompt, the more specific the answer. When I changed my prompt to “what is community property in Texas,” the statement that came up was “community property is usually split equally between the spouses.” That was better, but still incomplete. Only when I narrowed my prompt to “how is community property divided in a Texas divorce” did AI give me a decent response.
Get Your Questions Answered with Hammerle Finley, Not AI
My last prompt was “can Generative AI be relied upon.” The answer I received was “An AI Overview is not available for this search.”
When it does give an AI Overview, Google notes in smaller print at the bottom “Generative AI is experimental. For legal advice, consult a professional.”
I wholeheartedly agree.
If you have legal questions, schedule a consultation with our team of expert attorneys today, and don’t rely on AI for answers to your important questions.
Virginia Hammerle is an accredited estate planner and represents clients in estate planning, probate, guardianship, and contested litigation. She may be reached at legaltalktexas@hammerle.com. This blog contains general information only and does not constitute legal advice.