Lawyers to the wealthy warn that AI legal advice comes with serious risks
Even clients who can afford the best legal advice are using AI to analyze their estate plans and get tax strategy suggestions.
Lawyers to the wealthy told Inside Wealth that they have to defend their work to clients and protect them from incorrect advice.
What you ask Claude or ChatGPT about your legal situation can also be used against you in court, per a recent court ruling.
A version of this article first appeared in CNBC’s Inside Wealth newsletter with Robert Frank, a weekly guide to the high-net-worth investor and consumer. Sign up to receive future editions, straight to your inbox.
Lawyer Tasha Dickinson said she gets calls every week from clients asking about legal advice they got from ChatGPT, Claude or another artificial intelligence chatbot. Some don’t admit it, but she can tell from their line of questioning, she said.
One client, a high-net-worth Florida resident, asked Dickinson about creating a community property trust — an attractive option for married couples — saying he got the suggestion from AI to save on taxes for his heirs, she said. Dickinson quickly pointed out a problem: The client’s wife had recently died.
“I said, ‘Well, you do understand that a community property trust is between husband and wife, right?’ And there was silence on the phone,” said Dickinson, a partner at Day Pitney. “They’re like, ‘Oh, well, AI thought it was a good strategy.’ Well, like, in the universe, maybe it’s a good strategy, but it’s not a great strategy for you.”
Lawyers to the wealthy told Inside Wealth that their clients are increasingly using AI not only to research tax topics but to second-guess their lawyers’ advice. While some lawyers said AI helps clients come up with informed questions and learn basic concepts, they also say it creates headaches and legal risks.
Robert Strauss, partner at Weinstock Manion, said several clients have uploaded trust documents to AI systems and come back with a list of questions and suggested edits, forcing Strauss to defend his work and explain why the AI recommendations aren’t appropriate for the client’s situation.
“The questions are fine, but it results in spending more time on the matter than we would ordinarily spend,” he said. “We end up spending two, three, four hours of time dealing with stuff that so far has amounted to nothing. I have not actually received a single workable suggestion from that process.”
The result, he said, is an erosion of the client’s trust in their lawyer.
What’s more troubling, Strauss said, is that clients are sharing sensitive information with large language models, raising data privacy concerns and legal pitfalls. His firm is currently revising its client contract to warn clients that using AI chatbots this way can waive attorney-client privilege.
In February, a federal judge ruled that a criminal defendant’s conversations with Claude about his legal defense strategy were not protected by attorney-client privilege.
“What’s keeping me awake at night as it relates to AI? It’s not that AI is sometimes wrong, because I can correct those mistakes. And it’s not that people are double-checking my work on AI, because I have a lot of confidence in my work,” Dickinson said. “What I am concerned about is that when people put documents into AI and run these searches, they’re waiving the attorney-client privilege, and that is a huge issue.”
Dan Griffith, director of wealth strategy at Huntington Bank, warned that asking a chatbot how to protect your assets with a prenuptial agreement or how to sell your business while paying less in taxes, for example, could be used against you in court.
While high-net-worth clients can generally access — and afford — the best legal advice, they, like the rest of us, enjoy the convenience of AI, according to Griffith.
Dickinson added that the cost savings are still a draw. (“It’s not fun to pay for professional services,” she said.) She added that many of her clients are confident entrepreneurs.
“A lot of our clients have been so successful. I mean, they’re smart, right? And they have a drive for knowledge,” she said. “I think some err on the side of assuming that they understand more about this than they actually do.”
Using these AI tools, she said, “gives a false sense of knowledge.”
In some ways, this isn’t a new problem. Clients often bring suggestions to their lawyer that they got from a country club friend or an article. Dickinson described it as “a more evolved form of cocktail party talk.”
And the trend isn’t one-sided. Many lawyers use AI in their professional and personal lives, which has led to headline-making blunders like briefs with fake citations.
But few clients are familiar enough with AI and the law to write an effective prompt, lawyers said.
Ed Renn of Withers gave the example of a client who wanted to transfer unlimited assets to his spouse on ChatGPT’s advice. The client, however, didn’t mention that his wife was foreign-born, which means he couldn’t take advantage of the unlimited marital deduction without a special type of trust, according to Renn.
“If you don’t know quite what you’re doing, it’s garbage in, garbage out,” he said.
Renn added that AI tools appear to make more mistakes with more complex topics like international taxes and aren’t up to date with new legislation or guidance from the Internal Revenue Service.
Griffith said that deciding how to transfer your wealth to your loved ones requires a more complicated discussion than ChatGPT is prepared for. There are rarely easy answers when deciding, for instance, how to divide assets between children from a first marriage and a second spouse, he said.
“If your client asks, ‘Hey, if I do this trust, will my son have access to the funds that I give him at some point in time?’ The answer shouldn’t be ‘yes’ or ‘no.’ The answer should be, ‘Tell me more about your relationship with your son, or what’s the situation like?’” he said. “AI tends to be very solution-oriented and tries to find some way to get to yes. It doesn’t do a good enough job of saying, ‘You know what? Let’s get to the core of what your question is.’”