Sora is a Lesson on AI Innovation that Canada Needs to Avoid
| Citation | Anindya Sen. 2025. "Sora is a Lesson on AI Innovation that Canada Needs to Avoid." Intelligence Memos. Toronto: C.D. Howe Institute. |
| URL: | https://cdhowe.org/publication/sora-is-a-lesson-on-ai-innovation-that-canada-needs-to-avoid/ |
| Published Date: | November 14, 2025 |
From: Anindya Sen
To: Canadian AI watchers
Date: November 14, 2025
Re: Sora is a Lesson on AI Innovation that Canada Needs to Avoid
The government’s 30-day sprint to solicit submissions to its AI Strategy Task Force has ended and the new body will set out to define Canada’s approach to the use of artificial intelligence technologies.
But there is one important gap: there is no talk of safety standards or of a new AI Act.
Existing topics of focus include research and talent; AI adoption across industry and governments; commercialization of AI; scaling Canadian AI champions and attracting investment; building safe AI systems and strengthening public trust in AI; education and skills; building enabling infrastructure; and the security of Canadian infrastructure and capacity. This is consistent with the Minister’s earlier emphasis on scaling up the AI industry, driving adoption, and ensuring trust and sovereignty over the technology.
But while the Minister has mentioned his objective to table a new privacy bill, he has also said it is important not to “over-index” on regulation, hinting that onerous government stewardship would stifle innovation.
However, sensible legislation is critical to shaping the type of innovation that Canadian entrepreneurs will engage in, as well as to building public trust.
Take the example of OpenAI’s video-generation app Sora, which has gone viral in the United States because of its ability to make extremely realistic videos, leading to the creation and sharing of fake videos depicting deceased celebrities and historical figures in strange and offensive scenarios.
At the request of his estate, OpenAI has put guardrails in place to prevent the creation of videos depicting the late Martin Luther King Jr. And bowing to the US actors’ union, OpenAI has also agreed to crack down on deepfakes based on the likenesses of actors. However, there is growing demand for OpenAI to withdraw Sora because of privacy and misinformation concerns.
It is astounding that OpenAI appears not to have foreseen how its product could be used to produce videos depicting individuals in fictional scenarios that harm their reputations or infringe on intellectual property, or how quickly such videos could spread.
Further, it took the ‘easy route’ of releasing the product with “opt-out” provisions – the right to ask OpenAI not to use copyrighted figures or living individuals – rather than “opt-in” provisions that would require the company to obtain prior permission to use these materials. In the face of public outcry, the company has shifted course and introduced new guardrails to prevent unauthorized use. But this is unlikely to be the end of the matter, especially given differing interpretations – let alone differing laws across countries – of what might be protected by free speech rights, such as satire or parody, which has been OpenAI’s argument.
Sora can be used for beneficial purposes. For example, it is essentially a free technology that resource-constrained small entrepreneurs and community organizations could use to create informational videos. However, guardrails against harmful use would have been needed, such as specifying that AI-based products cannot use the likeness of individuals without consent.
The legal principle of consent would minimize harm to individuals while also efficiently curbing the massive amount of AI slop that has been generated in a short period of time.
While Ottawa’s proposed Artificial Intelligence and Data Act of 2022 had shortcomings, such as its definitions of large-scale systems and of harm, the Sora saga demonstrates that it was correct in seeking to address AI harms. Sora’s potential to spread harmful, made-up scenarios as if they were real is significant, and these types of harms should be addressed.
Based on the recommendations of the AI Task Force, Canada is about to embark on a path of significant AI innovation, which will surely be fueled by its existing strong AI ecosystem of world-class university research, a highly skilled talent pool, and mentoring and other resources for startups.
It is imperative that the federal government be clear about the type of widespread but responsible innovation it sees taking hold in Canada, based on a framework that encourages the beneficial development and adoption of AI while being clear about the types of harm innovators should seek to avoid.
One thing seems certain: a Canadian version of Sora, with the same flaws, would not make Canadians more likely to trust AI technologies, nor would it help the Canadian brand or Canadian commercial success abroad.
Anindya Sen is Professor of Economics & Acting Director of the Cybersecurity and Privacy Institute at the University of Waterloo.
To send a comment or leave feedback, email us at blog@cdhowe.org.
The views expressed here are those of the author. The C.D. Howe Institute does not take corporate positions on policy matters.