The Role of AI in Care Teams: To Complement, Not Substitute

Citation: Will Falk. 2025. "The Role of AI in Care Teams: To Complement, Not Substitute." Intelligence Memos. Toronto: C.D. Howe Institute. https://cdhowe.org/publication/the-role-of-ai-in-care-teams-to-complement-not-substitute/

From: Will Falk
To: Healthcare AI Watchers
Date: September 17, 2025
Re: The Role of AI in Care Teams: To Complement, Not Substitute

We hit peak AI hype this summer when The Guardian quoted 2024 Nobel laureate Demis Hassabis: “It’ll be 10 times bigger than the Industrial Revolution – and maybe 10 times faster.”

I checked the quote using ChatGPT. The Industrial Revolution doubled per capita income over a century. Multiply that by 10 in both size and speed, and… you get the idea. Hype.

But it is moving fast, especially in healthcare.

The Economist reports that global health-related internet searches have dropped 31 percent since June 2024. That’s not a blip; it’s a signal. People are ditching traditional search engines for generative AI.

In healthcare, this shift means users now get plain-language, tailored answers at their comprehension level, in their preferred language, with visuals. Input a clinical note (or just a hazy memory of the appointment), and AI offers a reasonable, jargon-free explanation. It’s health literacy on demand, finally accessible to your grandparents in their own language.

This is bigger than healthcare. Any complex area – cookbooks, user manuals, accounting rules, legal forms – is now being decoded in real time.

This is transformative. AI explanations are better, more accessible, and more usable than Dr. Google ever was.

Search and AI are merging, and together they are devouring traditional internet search. Specialty search will be next.

Clinical decision support is evolving in parallel. AI tools are overtaking textbooks, guidelines, and Wikipedia. They don’t replace clinician responsibility; they modernize the reference shelf.

We didn’t regulate physicians’ internet use, and we’re not rushing to regulate AI references either.

Meanwhile, ambient AI scribes are spreading quickly. This summer, British Columbia and Infoway joined Ontario in backing scribes as a remedy for the administrative burden.

Ontario has built “safe harbours” for clinicians using approved products. That framework is now spreading to other providers and provinces.

In both scribe and search use cases, AI is a complement, not a substitute. Clinicians supervise it, just as they would a medical student, office assistant, or peer clinician.

And both tools are expanding fast. Scribes started as glorified dictation tools. They now gather histories, summarize charts, book appointments, fill forms, transmit referrals, and check formularies or social needs. Eventually, they’ll become full-fledged clinical copilots.

But they won’t replace clinicians anytime soon.

That’s why complementary adoption matters. If AI augments a clinician’s role, it fits under current scope-of-practice and licensing rules. This isn’t drug substitution; it’s team augmentation.

The right regulatory analogy might not be drugs or devices. Some have argued for drug-style approval, given generative AI’s black-box nature. But that sets a substitution-level bar. Others argue AI is a medical device – but high-grade device regulation requires reproducibility and schematics. That’s often incompatible with the nature of algorithms.

A better fit may be licensure, as with human clinicians. Could we license AI the way we do MDs, RNs, and other staff, according to competency and supervision requirements?

Imagine a “Medical Student AI” that gathers data and reviews history; a “Resident AI” that synthesizes information, offers care plans, and manages them under supervision; and an “Attending AI” that makes diagnostic decisions and acts as a colleague.

This framework gives us flexibility while maintaining clinical accountability.
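To make the analogy concrete, here is a minimal sketch of what a tiered licence record might look like in software. It is purely illustrative: the tier names, the AILicence class, and the rule that a tool may only act at or below its licensed tier are assumptions for this sketch, not an existing standard or product.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical tiers mirroring the Medical Student / Resident / Attending analogy.
class LicenceTier(Enum):
    MEDICAL_STUDENT = 1  # gathers data, reviews history
    RESIDENT = 2         # synthesizes information, proposes plans under supervision
    ATTENDING = 3        # makes diagnostic decisions, acts as a colleague

@dataclass
class AILicence:
    tool_name: str
    tier: LicenceTier
    supervising_clinician: str  # accountability always maps back to a human

    def may_perform(self, task_tier: LicenceTier) -> bool:
        """A tool may only perform tasks at or below its licensed tier."""
        return self.tier.value >= task_tier.value

# Example: a scribe licensed at the Medical Student tier (names are invented).
scribe = AILicence("ambient-scribe-v2", LicenceTier.MEDICAL_STUDENT, "Dr. Osler")
assert scribe.may_perform(LicenceTier.MEDICAL_STUDENT)
assert not scribe.may_perform(LicenceTier.ATTENDING)
```

The point of the sketch is that accountability stays legible: every tool carries a tier and a named supervisor, just as a learner does.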

We can also extend privacy rules from the internet to public AI tools. Clinicians know they can’t upload patient information into random websites. The same goes for AI. With search and AI now merging, clarity is needed.

For now, using paid AI tools that disable data collection is a smart interim step. Institutions will pay for in-house AI but must keep pace with rapid external developments. In the short term, gold-standard clinical decision support platforms like OpenEvidence, Pathway.MD, and others will lead the way. These tools come with guardrails and a much clearer privacy posture.
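What might that interim step look like operationally? A toy guardrail sketch, assuming a hypothetical institutional allowlist; the tool names and the retains_data flags are invented for illustration, not claims about any vendor.

```python
# Hypothetical institutional registry of AI tools and their data-retention posture.
APPROVED_TOOLS = {
    "in-house-llm":     {"retains_data": False},
    "paid-tier-chat":   {"retains_data": False},  # paid plan with data collection disabled
    "free-public-chat": {"retains_data": True},
}

def may_send_patient_data(tool: str) -> bool:
    """Only allow PHI to flow to approved tools with data collection disabled."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and not entry["retains_data"]

assert may_send_patient_data("paid-tier-chat")
assert not may_send_patient_data("free-public-chat")   # data collection is on
assert not may_send_patient_data("random-website-ai")  # not on the allowlist
```

The same rule clinicians already apply to random websites becomes a checkable policy: no entry on the allowlist, no patient information.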

Clinical decision support, search, and scribes should be managed as we manage clinical learners or office assistants. Ask: At what level are they functioning? Do they need supervision like an office assistant, a med student, resident, or fellow? Not: “Can this replace me?”

That framing – AI as human-like intelligence – prepares us for the next shift: Agentic resources.

We may soon need specialized hospital departments for agentic oversight: HR for AI. It is becoming clear that most institutions will have thousands of agents.

Agentic resources will recruit, onboard, train, review, and terminate agents. Many will remain complements. Others may fully substitute for specific tasks, requiring credentialing and governance.
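One way to picture such a department is as a registry that tracks every agent through a lifecycle state machine. A minimal sketch: the states simply mirror the recruit/onboard/train/review/terminate cycle above, and the transition rules are assumptions for illustration.

```python
from enum import Enum, auto

# Hypothetical lifecycle states for an "HR for AI" registry.
class AgentState(Enum):
    RECRUITED = auto()
    ONBOARDED = auto()
    IN_TRAINING = auto()
    ACTIVE = auto()
    UNDER_REVIEW = auto()
    TERMINATED = auto()

# Allowed transitions: a review can end in reinstatement or termination.
TRANSITIONS = {
    AgentState.RECRUITED: {AgentState.ONBOARDED},
    AgentState.ONBOARDED: {AgentState.IN_TRAINING},
    AgentState.IN_TRAINING: {AgentState.ACTIVE},
    AgentState.ACTIVE: {AgentState.UNDER_REVIEW},
    AgentState.UNDER_REVIEW: {AgentState.ACTIVE, AgentState.TERMINATED},
    AgentState.TERMINATED: set(),
}

def advance(current: AgentState, target: AgentState) -> AgentState:
    """Move an agent to a new state, rejecting transitions the policy forbids."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move agent from {current.name} to {target.name}")
    return target

state = advance(AgentState.RECRUITED, AgentState.ONBOARDED)  # OK
```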

Providers may choose their agents like they choose their clinical reference tools. Microsoft’s Sequential Diagnosis with Language Models (Nori et al.) describes a team of AIs in an orchestrated model: Dr. Hypothesis, Dr. Test-Chooser, Dr. Challenger, Dr. Stewardship, Dr. Checklist. You might customize your own panel.

You could let each “voice” weigh in or ask for a consolidated “Chief Resident” response. This last part is wildly speculative, but the Nori paper is one of several exciting clinical AI papers released this summer.
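To show how such a customized panel might be wired together, here is a toy orchestration sketch. It is emphatically not the paper’s implementation: ask_model is a hypothetical stub standing in for whatever model API an institution actually uses, and only the persona names come from the paper.

```python
# Toy orchestration loop in the spirit of the Nori et al. role ensemble.
PANEL = [
    "Dr. Hypothesis",    # maintains the differential diagnosis
    "Dr. Test-Chooser",  # proposes the next diagnostic test
    "Dr. Challenger",    # argues against the leading hypothesis
    "Dr. Stewardship",   # watches cost and test utility
    "Dr. Checklist",     # checks for process errors
]

def ask_model(persona: str, prompt: str) -> str:
    """Stub: in a real system this would call an LLM with a role prompt."""
    return f"[{persona}] opinion on: {prompt}"

def consult_panel(case_summary: str, consolidate: bool = True) -> str:
    opinions = [ask_model(p, case_summary) for p in PANEL]
    if consolidate:
        # Hand all opinions to a "Chief Resident" persona for one answer.
        return ask_model("Chief Resident", " | ".join(opinions))
    return "\n".join(opinions)  # or let each voice weigh in separately

print(consult_panel("65M, chest pain, normal ECG"))
```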

Recent reviews like Fahrner et al.’s The Generative Era of Medical AI suggest to me that every specialty will soon have its own multimodal AI support. Some will become patient-facing. As they grow in sophistication, they’ll raise new regulatory questions.

But for now, clinical decision support, search, and scribes are complements. Let’s regulate them accordingly.

Will Falk is Public Policy Fellow at the Canadian Standards Association, and a Senior Fellow at the C.D. Howe Institute.

To send a comment or leave feedback, email us at blog@cdhowe.org

The views expressed here are those of the author. The C.D. Howe Institute does not take corporate positions on policy matters.
