Canada’s AI strategy needs to avoid excessive precaution

Citation: Daniel Schwanen. 2025. "Canada's AI strategy needs to avoid excessive precaution." Opinions & Editorials. Toronto: C.D. Howe Institute.
URL: https://cdhowe.org/publication/canadas-ai-strategy-needs-to-avoid-excessive-precaution/
Published Date: December 9, 2025
Accessed Date: January 22, 2026

Published in Financial Post.

Ottawa’s forthcoming AI strategy needs to walk a tightrope between two equally important principles: safeguarding Canadians from possible misuses of AI and giving our private and academic sectors the leeway to use Canada’s AI strengths to develop and commercialize new technologies and products. Commercial success — including adoption by both the private and public sectors here at home — can help AI generate new opportunities for businesses, raise Canada’s dismal productivity performance and lift Canadians’ future incomes.

Canada has already made significant and growing investments in AI capacity. But nurturing talent, performing R&D and even owning the IP that R&D produces, while necessary, are not sufficient to spur growth. Growth does not come just from accumulating its ingredients. In fact, the causation runs the other way: talent and IP will ultimately flow to wherever they can best pursue and benefit from growth opportunities.

Canada’s strategy needs to begin with the understanding that, even in this era of re-emerging industrial policy, profit-driven private-sector growth, far from being driven by irresponsible or harmful actors, is crucial.

The White House’s AI Action Plan, the British prime minister’s recent response to that country’s nuclear industry review and, to a certain extent, Canada’s “energy superpower” vision all accept this point. Governments can open doors, foster strategic initiatives and partnerships, and even in some cases directly support initiatives that can build capacity and unlock their countries’ comparative advantages. But in all cases, this is also accompanied by removal of unnecessarily burdensome regulations to allow the private sector to thrive.

The role of AI regulators should be to help prevent clearly spelled-out harms to privacy, reputation, competition and security writ large. They can do this using a principles-based approach, like Canada’s banking regulation, rather than an overly prescriptive precautionary approach, which presumes to know in advance who will cause which harms, and how.

Regulators need to track the capacity of actors using AI systems to do harm. Specific new tools should be adopted as necessary to enforce prohibitions against such harms — whether caused by deepfakes or by AI systems colluding to fix prices or rig public sector bids.

But because AI’s evolution is constant and rapid and its greatest advances are probably ahead of us, what makes most sense is to embed the principles of responsible use right from the start, while allowing room for experimentation and development of new products and business models.

How we protect troves of personal data collected from Canadians by public entities, while also making it available, once de-identified, to public and private sector researchers, can illustrate this approach. Such data is the raw material of AI, which likely will have trouble proceeding without it. Careful protocols could ensure its safe, de-identified sharing.

Data sovereignty is often taken to mean keeping complete control of our data. But it should also be about giving Canadians access to the data they need in order to thrive, helping to differentiate AI models trained in part with uniquely Canadian data from others, a potential competitive advantage in global markets.

More broadly, Canadian regulators need to avoid being bound by — and binding Canadians to — what in a recent paper for the C.D. Howe Institute, former clerk of the Privy Council Michael Wernick calls the “deadweight of dogma” — prioritizing the prevention of everything that could go wrong rather than facilitating what might be successful.

In aiming for the right balance, Canada’s AI strategy should lean more toward making room for experimentation and rewarding success, and less toward ill-targeted pre-emptive rules and regulations.

Daniel Schwanen is senior vice-president of the C.D. Howe Institute.
