Back
Canada's AIDA and Health Care AI Governance: Lessons from a Failed Regulatory Framework
ai.nejm.org/doi/full/10.1056/AIpc2500153
Published in NEJM AI (June 2025), this paywalled article is relevant to AI governance researchers studying sector-specific regulation, particularly how general-purpose AI laws may inadequately address health care AI risks.
Metadata
Importance: 52/100 · journal article · analysis
Summary
This NEJM AI article analyzes Canada's failed Artificial Intelligence and Data Act (AIDA), which was terminated with Parliament's prorogation in January 2025. It critiques AIDA's lack of specificity, underinclusiveness, and absence of sector-specific health care oversight, and proposes reforms for future AI legislation. The Canadian experience offers broader lessons for global AI regulation balancing innovation with patient safety.
Key Points
- Canada's AIDA was terminated in January 2025 before enactment, leaving a regulatory gap for health care AI governance.
- AIDA was criticized for being too broad and for lacking sector-specific provisions on safety, bias, transparency, and patient privacy.
- The article proposes targeted, sector-specific regulatory approaches as essential for safe AI integration in health care.
- Canada's regulatory failure offers instructive lessons for global AI governance efforts, especially in high-stakes domains.
- Balancing responsible innovation with patient safety requires more granular legislative frameworks than AIDA provided.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Artificial Intelligence and Data Act (AIDA) | Policy | 46.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 7 KB
Lessons from the Failure of Canada’s Artificial Intelligence and Data Act | NEJM AI
The Wayback Machine - http://web.archive.org/web/20250621211821/https://ai.nejm.org/doi/full/10.1056/AIpc2500153
Policy Corner
Lessons from the Failure of Canada’s Artificial Intelligence and Data Act
Authors: Abdullah H. Ishaque, M.D., Ph.D. (ORCID: https://orcid.org/0000-0001-7938-9490); Abdi Aidid, J.D., LL.M. (ORCID: https://orcid.org/0009-0003-2652-9690); Gelareh Zadeh, M.D., Ph.D. (ORCID: https://orcid.org/0009-0009-2002-5313)
Published June 18, 2025
DOI: 10.1056/AIpc2500153
Copyright © 2025
Abstract
Canada’s initial attempt at AI governance, the Artificial Intelligence and Data Act (AIDA), was introduced within Bill C-27, but was ultimately terminated with the prorogation of Parliament in January 2025. AIDA sought to establish a risk-based regulatory framework; however, it was criticized for its lack of specificity, underinclusiveness, and absence of sector-specific oversight — issues that are particularly consequential for health care AI applications. The broad and generalized nature of AIDA left regulatory gaps concerning safety,
... (truncated, 7 KB total)
Resource ID: 42365d7c4104a03d | Stable ID: sid_5NY6veEfuH