
A police officer carries paperwork into the Palais de Justice, Quebec Superior Court, in Montreal in August, 2025. Christinne Muschi/The Canadian Press

Robert Diab is a law professor at Thompson Rivers University.

Canada may have its first case involving a judge who relied on artificial intelligence to write a decision.

The issue has surfaced in a complex commercial fraud case in Quebec Superior Court that resulted in a judgment of more than $120-million. The defendants have appealed, alleging that the ruling contains hallmarks of AI use, including citations to earlier cases that do not exist and verbatim quotations from testimony that was never given.

The allegation that Justice Jocelyn Geoffroy used AI has not been confirmed, and he has declined to comment. But as La Presse, which broke the story, reported, the fake cases cited in the decision cannot be found in any of the parties’ submissions.

While this may be the first Canadian appeal involving a judge’s use of AI, it likely won’t be the last.

The larger question is whether a judge’s reliance on AI renders a trial unfair. The stakes are high. If AI impairs a judge’s ability to be fair and impartial – or appear to be so – using it will erode trust in the courts. But does that mean courts have to forgo the potential benefits of AI altogether?

The Quebec case helps clarify the limits of acceptable AI use by judges.


If a judge relies on AI that hallucinates legal authorities or invents evidence, the response is obvious. A decision based on the wrong law or facts cannot stand. The defendants in the Quebec case are making both claims. They say Justice Geoffroy relied heavily on a fabricated case to support a rule of law that does not exist and mischaracterized other authorities as establishing a “clear legal framework.” The appeal also alleges the decision relied on testimony that was never given. If those claims are correct, the judgment would be difficult to defend on appeal.

The harder question arises when AI does not hallucinate. What if a judge uses AI simply to summarize the law or the evidence in a case? Would that undermine trial fairness?

Our courts have dealt with a similar issue before. In 2013, the Supreme Court of Canada held that a decision can still be fair even if a judge copies substantial portions of a party’s written argument into the judgment. Trial decisions carry a presumption of integrity. That presumption is displaced only if a reasonable observer would conclude the judge failed to independently consider the evidence and the issues.

The takeaway seems clear. The issue is not whose words appear in a judgment but whether the judge actually made the decision.

Some scholars say the same reasoning should apply to AI. Years before ChatGPT appeared, law professor Eugene Volokh argued that what ultimately matters is whether the reasons supporting a decision are cogent and persuasive. AI may be biased and its decision-making processes opaque, but the same is true of humans. Judges persuade us they reached the right result by explaining their reasoning. If AI produces sound reasoning and a judge adopts it after independent review, the outcome might still be fair.

Yet there is an important difference. Our idea of a fair trial assumes being heard by a human being – or sometimes a jury of our peers – who deliberates before reaching a decision. AI does not deliberate. It predicts the next word in a sentence based on probabilities. Its output may sound convincing, but decisions guided by that reasoning might still seem arbitrary, more a product of computation than judgment.

The difference matters for public confidence in the courts. If AI begins to shape the reasoning behind judicial decisions, litigants may reasonably wonder whether the outcome in their cases depended on a judge’s careful consideration or simply on the words the judge happened to use when prompting AI.

The line that courts will need to draw is one between assistance and decision-making. But as the Quebec case suggests, using AI without vigilance can easily result in that line being crossed. In a decision of more than 200 pages, perhaps Justice Geoffroy overlooked a law clerk’s overreliance on AI. But if that is what occurred, it’s a mistake that matters.

And who pays for that mistake? The trial in the Quebec case lasted 36 days and may have to be done over again. Should the government pay for the first one?

Efficiency has its place in the justice system. But judgment is the essence of judging. Courts may adopt AI as a tool, but they cannot allow it to replace the human deliberation that makes a trial fair. Getting this wrong will be costly for us all.

Editor’s note: This article has been updated to correct secondary references to Justice Jocelyn Geoffroy, who is male.
