
Ryan Appleby (right), a veterinary radiologist at the Ontario Veterinary College, warns of risks in relying on AI for diagnostics. (Photo: Kenneth Chou)
Ryan Appleby is concerned about the sudden pervasiveness of AI in veterinary diagnostics.
The veterinary radiologist and associate professor at the Ontario Veterinary College, who studies AI’s effect on the industry, says the technology is used by practitioners to assist with everything from notetaking and research to diagnostics and care recommendations.
“I actually think it’s creating a very dangerous situation for pet owners,” he says.
He explains that veterinarians used to send X-rays to trained radiologists for a medical diagnosis, typically charging clients upwards of $100. Now many also give pet owners the option of running the images through an AI tool that provides an instant diagnosis for about $10, making it a popular choice.
What many pet owners don’t realize, Mr. Appleby warns, is that there is often no human professional verifying those AI-generated results.
“There’s not enough information for veterinarians to really understand how to use the outputs of those systems, because the veterinarians using them are not radiologists,” he says. “So, there may be cases where it has false positives, false negatives, or even cases where it’s a true positive or true negative and still negatively impacts the care for that patient.”
Mr. Appleby explains that without a trained human radiologist reviewing the results, the general practitioner could misinterpret them or recommend unnecessary and often costly additional testing.
“If it were my own animal, my family members’ or a close friend’s, I would not want them seeking out an AI interpretation,” Mr. Appleby says, warning against putting too much faith in results that aren’t reviewed by a human professional.
From veterinarians to accountants to lawyers, AI tools are rapidly and profoundly transforming professional services, expanding access to everything from legal advice to financial planning while lowering costs.
According to a recent survey of legal professionals, 77 per cent use AI for document review, 74 per cent use it for legal research, 74 per cent use it to summarize documents and 59 per cent use it to draft briefs or memos.
While many, including Mr. Appleby, are excited by the technology’s potential, they’re also raising concerns that it is developing too fast for regulators to keep up, introducing new challenges around data privacy, security, disclosure and accuracy.
“AI has the potential to make law more accessible and help Canadians from all walks of life access legal services more cost effectively,” says Jonathan Griffith, the practice advisor and equity ombudsperson for the Law Society of Alberta. “The flip side of that is that it’s easy to take the benefits for granted and to overlook the risks.”
That concern inspired the Canadian Bar Association’s Ethics and Professional Responsibility Sub-Committee to publish a toolkit for Canadian legal practitioners.
Mr. Griffith, who chairs the committee, says there are no new laws that govern AI’s use by legal professionals, but there are existing rules and regulations that can be applied to the technology.
“It’s important to understand what AI can do and what AI can’t do,” he says. “AI can help analyze documents, it can help categorize, it can help summarize. But it can’t verify and it can’t audit its own accuracy. So lawyers need to be aware that there is a real risk for there to be inaccurate, misleading, false or even biased information that comes out of AI.”
Mr. Griffith also highlights the potential for violating existing rules pertaining to client confidentiality and cybersecurity, emphasizing that lawyers are advised not to input any sensitive information into AI tools such as ChatGPT.
In existing codes of conduct, Canadian law societies enforce obligations of candour with clients, which Mr. Griffith says can be interpreted to extend to the disclosure of when and how AI is being used.
“Some courts have released notices that require a professional to disclose the use of AI, and to talk about how it was used in generating the submissions that are being brought to court,” he says. “Lawyers are not obligated to become experts in AI; they just have to understand how to use it, and they have to understand their ethical obligations.”
A similar approach has been adopted by Chartered Professional Accountants of Canada (CPA Canada), which has published a thought-leadership series on AI in accounting but has similarly not adopted a formal policy.
“We put out a few papers in a series that we worked on together with the American Institute of CPAs,” says Melissa Robertson, senior manager of research and thought leadership for CPA Canada. “It is a thought leadership series - so it’s not necessarily regulation or guidance - exploring the impact of AI, and what you need to consider when it comes to risk management of AI systems, looking at a lot of the existing frameworks.”
As with the governing body for legal professionals in Canada, the country’s accounting industry regulator is optimistic about the opportunities AI provides to lower costs and increase access to services, while remaining cautious about issues such as accuracy, transparency, bias and data privacy.
“The onus is still on the user and not on the tool to ensure accuracy, ensure that the information is being used properly and that you’re not reaching incorrect conclusions,” Ms. Robertson says. “There are a lot of areas of opportunity for AI to be used by professionals, and I think that we need to approach it as not something to be scared of, but to think of how best we can use it.”