The D.B. Weldon Library at Western University in London, Ont. (Photo: Scott Norsworthy)
Vivek Goel is the president and vice-chancellor of the University of Waterloo. Mark Daley is the chief artificial intelligence officer and a professor in the department of computer science at Western University. They are co-chairs of the Council of Ontario Universities working group on AI.
Canada has a habit of inventing the future and then watching the rest of the world profit from it.
We did it with vaccines, letting our domestic manufacturing capacity erode, which caught us off guard in the pandemic. We did it with automobiles, ceding world-class production to global supply chains. Now we risk doing it again with artificial intelligence.
This isn’t for lack of early vision. Decades of federal investment through agencies such as the Natural Sciences and Engineering Research Council of Canada and the Canadian Institute for Advanced Research nurtured many of the creators of the AI revolution, including Turing Award laureates Geoffrey Hinton, Yoshua Bengio and Rich Sutton. Their students now lead AI labs from Silicon Valley to Shenzhen. Cohere, a company founded by Canadian Aidan Gomez that builds AI solutions, proves that Canadian companies can compete globally.
But even with Cohere’s success, we lack a truly public option – a foundation model built by and for Canadians. A foundation model is a large, general-purpose AI system trained on massive datasets. It serves as a base layer of intelligence that other applications build on. Models such as OpenAI’s ChatGPT and Google’s Gemini are examples, but they’re developed by private companies with little transparency or public oversight. A homegrown foundation model developed through public-private collaboration could serve Canadian interests rather than investors alone.
Why does this matter? Because AI sovereignty is more than just data centres and datasets. Neural networks are not engineered, they are grown. This distinction matters more than most policy-makers realize. While we speak of “building” AI systems, the reality is far stranger: We cultivate them through iterative processes that resemble viticulture more than civil engineering. The implications for Canadian AI sovereignty are significant.
Fundamentally, it’s about developing talent. Training large foundation models requires intuitive know-how that can only be acquired through experience. Knowing how to curate training data, or how to tell the imminent collapse of a training run from temporary turbulence, are skills that require doing. It’s like winemaking: You can read all the books, but you can’t make great wine until you’ve tended vines through frost and drought.
What if we brought together Canadian businesses, government and academia to create a made-in-Canada foundation model? Not because we think we can compete directly with ChatGPT, but because the process itself would cultivate Canadian AI practitioners with genuine expertise and enhance AI education.
Today, students mostly learn on toy models that differ from large-scale production systems in subtle, but critical, ways. It’s like training surgeons on mannequins but never letting them near an operating room. A national model program would provide that missing experience.
European projects offer a powerful precedent. In November, Germany launched its federally funded Sovereign Open Source Foundation Models project, which aims to develop an open AI language model that complies with European regulatory requirements.
In September, Switzerland launched Apertus, the world’s first large-scale open multilingual model developed collaboratively by EPFL, ETH Zurich and their national supercomputing centre. They understand that sovereignty in the AI age doesn’t mean building the biggest model. It means possessing the capability to build a model when your national interests demand it.
Canada can do the same, and the benefits would extend beyond technical capacity. A bilingual, Canadian foundation model, trained on Canadian data and reflecting Canadian values, would support AI safety research, power domestic innovation and serve as a training ground for the next generation. We don’t need to chase the frontier to reap rewards. A relatively modest model from Google recently helped identify novel cancer therapy pathways. Bigger isn’t always better.
Foundation models are critical infrastructure for the 21st century. They will shape everything from health care diagnostics to how citizens interact with government services.
This isn’t just about technological nationalism. It’s recognition that in an era where AI increasingly mediates human knowledge and decision-making, the ability to grow our own systems becomes a matter of democratic self-determination. The Swiss understood this. Singapore is following suit. Canada has the opportunity to do the same.
The investment required is substantial but not prohibitive. We don’t need OpenAI’s billions. We need what aerospace engineers call “minimum viable capability,” enough expertise and infrastructure to understand what we’re buying, adapt what we need and build what we must. The alternative is dependence on corporate and foreign priorities.
Editor’s note: This article has been updated to correct the name of the Natural Sciences and Engineering Research Council of Canada.