New Canadians take the oath of citizenship in Toronto, July 19, 2024. Chris Young/The Canadian Press

The Immigration Department is experimenting with an AI tool to advise newcomers on where they would be best suited to settle in Canada, one of a growing number of ways it is using artificial intelligence, including for fraud detection.

Stanford University’s Geomatch algorithm is designed to predict the probability that new immigrants, including refugees, will succeed in particular locations within a destination country. It uses machine learning trained on data about past immigrants and their outcomes, along with the work history, education and personal characteristics of new arrivals.

Use of the algorithm by Immigration, Refugees and Citizenship Canada, in partnership with Stanford’s Immigration Policy Lab, was disclosed in the federal department’s newly published first AI strategy. The U.S. university declined to comment.

Immigrants are not obliged to follow the recommendation, which is “offered as handy information for them to consider,” the strategy says.

IRCC is experimenting with a number of AI tools, many of which focus on fraud prevention. The strategy document says artificial intelligence is helping the department detect false narratives. These could form the basis of an asylum claim or deflect attention from security concerns, for example.

Machine learning tools are also being used to detect anomalies in applications and irregular travel patterns, which could signal that a refugee or immigrant came from a country other than the one claimed.

AI systems have been trained to detect fraudulent manipulation of documents, such as academic records and bank statements, as well as artificially “morphed” photographs that could be used in an attempt to commit identity fraud or to mislead an immigration officer about a person’s age.

“IRCC uses a range of tools, including advanced analytics, to support officers in identifying potential fraud,” department spokesperson Isabelle Dubois said in an e-mail.

Last year, IRCC investigated an average of roughly 8,000 cases of suspected immigration fraud a month, and refused an average of 7,900 fraudulent applications a month, Ms. Dubois said. The proportion of visitor visas refused because of fraud rose to 7.1 per cent in 2025, up from 4.6 per cent in 2024.

In the new strategy, the Immigration Department says it is not looking to implement “fully autonomous AI” but is “ready to accelerate our adoption of AI systems that can drive efficiencies in routine tasks that otherwise use significant time and resources.”

Artificial intelligence is already being used to “triage” immigration applications and make assessments and recommendations, for example on whether an immigration officer should approve a visitor visa.

The department says it will continue to use AI to speed up decision-making by officials, such as by flagging “straightforward low-risk files.”

But IRCC does not use autonomous AI agents that can refuse client applications, the strategy says. And the department is avoiding “black box” AI models, where the system’s internal workings are a mystery and the logic used is opaque or inexplicable.

“Doing so would run counter to the administrative law principle that clients are entitled to a meaningful explanation of decisions and to a transparent appeals process,” it says.

The strategy refers to IRCC as “a leader in AI experimentation” at the forefront of integrating artificial intelligence into government operations. It also acknowledges that while AI has immense potential, the technology also poses risks, for example by perpetuating discrimination.

According to the strategy, the department strives to use diverse data to train AI systems in order to guard against potential bias. It is also aware of the importance of privacy controls, which include allowing AI systems to handle only the minimum personal information necessary, and regularly runs tests and audits.

When developing AI systems, IRCC will integrate privacy measures including “trying to use anonymised or internally generated synthetic data wherever possible to minimize risks to privacy,” the strategy says. Where this is not possible, it will make sure that personal information entered into an AI system can be deleted or corrected.

Syed Hussan, executive director of the Migrant Workers Alliance for Change, urged caution in the use of AI because it often makes mistakes.

He said it was “bizarre” the department was using an AI tool to suggest where people should settle rather than relying on advice from communities, since immigration applications do not include granular information about a person’s likes and dislikes, such as whether they are interested in the arts.

He warned that officials may apply less scrutiny to a file, or be less likely to check facts before making a decision, if AI has been involved in the analysis, and said IRCC must ensure there are checks and balances in place.

“People are turning over their thinking to AI,” he said.

Artificial Intelligence Minister Evan Solomon is working on an AI strategy for the government, which is expected to include support for Canada’s sovereign AI industry and measures to protect children.
