There's an 'art' to writing AI prompts for personal finance, MIT professor says
Many people are turning to artificial intelligence for personal finance advice.
Writing a good AI prompt can mean the difference between receiving a useful output and a poor one, experts say.
While AI can be helpful for general financial information, it can be risky in other scenarios, especially when it comes to precise calculations about an individual’s financial picture, experts say.
Many Americans are turning to artificial intelligence for financial advice.
But getting good or bad advice depends a lot on how well users write their instructions, or prompts, to AI platforms.
“I think that there’s a real art and science to prompt engineering,” Andrew Lo, director of MIT’s Laboratory for Financial Engineering and principal investigator at its Computer Science and Artificial Intelligence Lab, said in a recent webinar for Harvard University’s Griffin Graduate School of Arts and Sciences.
The limitations of AI for personal finance
First, it’s important to note that AI has limitations when it comes to financial planning, experts said.
AI is generally good at providing high-level overviews of financial topics: for example, why it’s important to diversify investments, or why exchange-traded funds may be better than mutual funds in some cases but not others, Lo told CNBC in an interview.
But it struggles in other areas. Tax planning is a good example, Lo said.
Perhaps counterintuitively, AI isn’t great at crunching numbers and doing precise financial calculations, he said. While AI can provide general guidance on the types of tax deductions or tax rules people might consider, asking AI to do a numerical analysis of their own taxes is risky, he said.
“When it comes to very, very specific calculations of your own personal situation, that’s where you have to be very, very careful,” Lo said.
AI can also sometimes provide wrong answers due to so-called “hallucination” by the algorithm, Lo said.
“One of the things about [large language models] that I find particularly concerning is that no matter what you ask it, it’ll always come back with an answer that sounds authoritative, even if it’s not,” Lo said.
That’s not to say people should avoid the technology altogether.
And indeed, many seem to be using it: 66% of Americans who have used generative AI say they have used it for financial advice, with the share exceeding 80% for millennials and Generation Z, according to an Intuit Credit Karma poll of 1,019 adults published in September.
About 85% of the respondents who have used GenAI in this manner acted on the recommendations provided, according to the survey.
“[People] should be using AI for financial planning, but it’s how they apply it that’s important,” Lo said.
How to write a good AI prompt for personal finance
This is where writing strong prompts can be helpful.
“Even if it’s the best model in the world, if it’s fed a bad prompt” it will only be able to do so much, said Brenton Harrison, a certified financial planner and founder of New Money New Problems, a virtual financial advisory firm.
A strong prompt isn’t too broad: It contains enough detail so the AI can provide relevant information to the user, Lo said.
Take this example he provided, related to retirement planning.
A bad prompt in this context might be: “How should I retire?” Lo said during the Harvard webinar.
“It’s just too generic,” he said. “Garbage in, garbage out.”
Lo said a better prompt would be: “Assume you are a fee-only fiduciary [financial] advisor. Here are my goals, constraints, tax bracket, state, assets, risk tolerance and timeline. Provide me with, number one: base case strategy. Number two: key assumptions. Three: risks. Four: what could invalidate this plan. Five: what information you are missing, and in particular, what are you uncertain about.”
In this case, the user is telling the generative AI program (examples of which include OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini) to frame its advice as a fiduciary. This is a legal framework that requires a financial advisor to make recommendations that are in a client’s best interests.
Ultimately, it’s a process of trial and error, almost like a conversation that involves multiple prompts, perhaps more than 20, until the user gets a satisfactory answer, Lo told CNBC.
It’s important to double- and triple-check the output, especially when it comes to financial issues, he said.
How to ‘reverse engineer’ a prompt
After going through this sequence of prompts, users can “shortcut” the process for future queries by asking one additional question: “What prompt should I have asked you to generate the answer that I was looking for?” Lo told CNBC.
Basically, the user is asking the AI how to generate the “right” prompt more quickly, Lo said.
“Once you get that response, you can store it away and use it in the future for questions that are similar to the one that you just asked,” Lo said. “That’s one way to make your prompt engineering more efficient: It’s to reverse engineer the prompt by asking AI to tell you what you should have done differently.”
Take an additional step
Lo told CNBC he recommends taking a few additional steps for financial questions.
When a user receives what seems to be a good answer to their question, they should always follow up by asking the AI additional questions to determine its limitations, for example, what it’s uncertain about and what information it’s missing, Lo said.
For example: “What kind of information did you not have to be able to construct that recommendation, and that could lead to some unreliable outcomes?”
Or, along the same lines: “How convinced are you that this is the correct answer? What kind of uncertainties do you have about the answer, and what kinds of things don’t you know that you need to come up with a conclusive answer to the question?”
This way, the user can tease out the range of uncertainty behind an AI’s answer, Lo said.
Along the same lines, Harrison, the financial planner, said he recommends requiring the AI program to list its sources. Users can also instruct the AI to limit its sources to those that meet certain criteria.
“If you don’t require it to verify the sources, it’ll give an opinion, which isn’t what I’m looking for,” Harrison said.
Ultimately, there’s so much “context” and complexity in each individual’s financial situation that a human financial planner can tease out of their client, Harrison said. Someone using AI won’t necessarily know whether they’re uncovering all those subtleties in their prompts, he said.
“Looking to [AI] for advice implies you are giving it enough information to form an opinion and create a recommendation, and that’s a step further than I’d go with AI,” he said.