
As fraud attempts become more difficult for advisors to spot, relying on a client's personal details is one solution.
Attempts to defraud investors are becoming more sophisticated with the adoption of artificial intelligence. When the fraudster takes on the writing style, voice or even the image of a client, how can advisors differentiate a real request from a scam?
Leaning into personal connections with clients has proven to be an effective defence, advisors say, but more controls and processes may be needed as scams evolve.
Almost all Canadians are seeing more targeted and sophisticated scams, according to a poll from Royal Bank of Canada earlier this year, with 86 per cent of those surveyed saying they believe it’s getting harder to recognize scams and protect themselves. About two-thirds noted a rise in deepfake scams.
This technology is advancing quickly, and it has become almost impossible to spot a fake, says Larry Zelvin, executive vice-president and head of Bank of Montreal’s financial crimes unit in New York.
AI has reached the point at which most people can’t tell whether they’re having a conversation with a real person or with AI, says Mr. Zelvin, a former director of the National Cybersecurity and Communications Integration Center for the U.S. Department of Homeland Security.
Sophisticated fraud attempts are not new, says Carlo Cansino, senior financial advisor with The McClelland Financial Group at Assante Wealth Management in Markham, Ont.
Several years ago, he says, his team received a plausible, personalized fraudulent e-mail from someone posing as a client after the client’s e-mail account was hacked.
Generative AI will only make these impersonations easier for fraudsters, he says.
“They’re just using the AI to monitor e-mails going in and out of an inbox to understand the language and the grammar and the style of writing, so it’s even harder to detect,” he says.
Get personal
Even as fraud attempts become more difficult to spot, there are several red flags advisors can look out for in these types of requests.
On phone calls and in video meetings, a lack of small talk is one sign of spoofing, Mr. Zelvin says. AI-generated identities will try to steer the conversation away from personal topics and toward subjects they are prepared to discuss.
This recently happened to a BMO employee, who received a call from a deepfaked voice that sounded just like a trusted business contact phoning from a new number. When the employee asked about a personal matter, Mr. Zelvin says, the conversation turned unusual and rushed.
When the employee realized they might not be talking to the right person, they ended the call and phoned the contact back using the number on file.
“[The contact said], ‘No, we weren’t just talking – what are you talking about?’” Mr. Zelvin says. “That was the chill-down-the-spine moment.”
Many advisors are using their personal relationship with clients to verify the legitimacy of requests.
Darren Coleman, senior portfolio manager with Portage Cross Border Wealth Management at Raymond James Ltd. in Oakville, Ont., says keeping clients safe begins with comprehensive notes and knowing the client well enough to ask specific questions to validate their identity. This could be a phrase or a question that only the client can answer.
For example, Mr. Coleman says, it could be something such as, “In our last conversation, you told me about the cruise you took. Remind me where it was again?”
Mr. Zelvin agrees: “What’s old is new again. You may need to create passwords or passcodes to authenticate that you are who you are.”
AI versus AI
Advisors must continue to educate clients on the need to be vigilant when sharing their information and to stay on top of evolving methods of fraud, Mr. Cansino says.
Technology may also play a role. While his team hasn’t yet used AI tools to combat fraud attempts, Mr. Cansino says this could be on the horizon.
“That could be the next phase of protecting ourselves and our clients,” he says.
Mr. Zelvin notes that one of the key defences against AI-driven fraud is AI itself. For example, BMO uses technology in its call centres to identify whether a call is AI-generated or whether the voice belongs to a known fraudster.
The tool scales well in those environments, Mr. Zelvin says, but not yet to individual cellphones or video-meeting apps. “It’s close, but it’s not there yet.”