The Timidity Danger in AI Regulation

Citation: Daniel Schwanen. 2025. "The Timidity Danger in AI Regulation." Intelligence Memos. Toronto: C.D. Howe Institute.
URL: https://cdhowe.org/publication/the-timidity-danger-in-ai-regulation/
Published: December 17, 2025
Accessed: January 22, 2026

From: Daniel Schwanen
To: Artificial Intelligence watchers
Date: December 17, 2025
Re: The Timidity Danger in AI Regulation

Ottawa’s forthcoming AI strategy needs to walk a tightrope between two equally important principles: safeguarding Canadians from possible misuses of AI, while giving our private and academic sectors the leeway to use Canada’s AI strengths to develop and commercialize new technologies and products.

Commercial success – including adoption by both the private and public sectors here at home – can help AI generate new opportunities for businesses, raise Canada’s dismal productivity performance and lift Canadians’ future incomes.

Canada has already made significant and growing investments in AI capacity. But nurturing talent, performing R&D and even owning the intellectual property produced, while necessary, are not sufficient to spur growth. Growth does not come just from accumulating its ingredients. In fact, causation runs the other way: Talent and IP will ultimately flow to where they are best able to explore and benefit from growth opportunities.

Canada’s strategy needs to begin with the understanding that, even in this era of re-emerging industrial policy, profit-driven private-sector growth is crucial – far from being, as is sometimes assumed, essentially the work of irresponsible or harmful actors.

The White House’s AI Action Plan, the British prime minister’s recent response to that country’s nuclear industry review and, to a certain extent, Canada’s “energy superpower” vision all accept this point. Governments can open doors, foster strategic initiatives and partnerships, and even in some cases directly support initiatives that can build capacity and unlock national comparative advantages. But in all cases, this is also accompanied by removal of unnecessarily burdensome regulations to allow the private sector to thrive.

The role of AI regulators should be to help prevent clearly spelled out harms to privacy, reputation, competition and security writ large. They can do this using a principles-based approach, like Canada’s banking regulation, rather than an overly prescriptive precautionary approach, which presumes to know in advance who will cause which harms, and how.

Regulators need to track the capacity of actors using AI systems to do harm. Specific new tools should be adopted as necessary to enforce prohibitions against such harms – whether caused by deepfakes or by AI systems colluding to fix prices or rig public sector bids.

But because AI’s evolution is constant and rapid and its greatest advances are probably ahead of us, what makes most sense is to embed the principles of responsible use right from the start, while allowing room for experimentation and development of new products and business models.

How we protect troves of personal data collected from Canadians by public entities, while also making it available, once de-identified, to public and private sector researchers, can illustrate this approach. Such data is the raw material of AI, and AI that incorporates this Canadian content can spur useful innovation in our domestic firms and institutions and even constitute an advantage in global markets.

Data sovereignty is often taken to mean keeping complete control of our data. But it should also be about giving Canadians access to the data they need. Careful protocols could ensure its safe, de-identified sharing.

More broadly, Canadian regulators need to avoid being bound – and binding Canadians with them – to the “deadweight of dogma,” in former Privy Council Clerk Michael Wernick’s phrase, lamenting the prioritization of preventing everything that can go wrong over facilitating what might succeed.

In aiming for the right balance, Canada’s AI strategy should lean more toward making room for experimentation and rewarding success, and less toward ill-targeted pre-emptive rules and regulations.

Daniel Schwanen is senior vice-president of the C.D. Howe Institute.

To send a comment or leave feedback, email us at blog@cdhowe.org.

The views expressed here are those of the author. The C.D. Howe Institute does not take corporate positions on policy matters.

A version of this Memo first appeared in the Financial Post.
