WASHINGTON, May 8, 2024 — The Food and Drug Administration may need a radical overhaul of how it evaluates medical artificial intelligence, according to a new proposal. A team of researchers argues the current framework isn’t equipped to handle the rapidly evolving technology, and suggests an approval pathway that mirrors the rigorous training and licensing required of physicians.
A New Framework for Intelligent Tools
The proposal calls for a more dynamic and ongoing evaluation of medical AI, similar to continuing medical education.
KEY TAKEAWAYS
- The current FDA approval process may not be suitable for rapidly evolving medical AI.
- Researchers propose an approval pathway that parallels physician training and licensing.
- Continuous monitoring and evaluation of AI performance are crucial.
- The proposal, published in JAMA Internal Medicine, aims to ensure patient safety and efficacy.
The suggestion, detailed in the May 8 issue of JAMA Internal Medicine, centers on the idea that medical AI isn’t a static product, but a continuously learning system. Therefore, a one-time approval, like that given to traditional medical devices, isn’t sufficient. The FDA needs a system that accounts for ongoing performance and adaptation. A key aspect of this proposal is establishing a pathway for medical AI tools that closely resembles how physicians are trained, assessed, and continuously monitored throughout their careers.
The Limitations of Current Regulations
Existing FDA regulations primarily focus on pre-market approval, assessing a device’s safety and effectiveness at a specific point in time. However, medical AI algorithms can change and improve—or even degrade—after deployment as they are exposed to new data. This creates a challenge for regulators who need to ensure ongoing safety and efficacy.
The researchers propose a multi-stage approval process. Initial approval could be granted based on performance in controlled trials, similar to current practices. However, this would be followed by a period of post-market surveillance and continuous learning, with regular evaluations to ensure the AI continues to perform as expected. This ongoing assessment could involve real-world data analysis, audits of the AI’s decision-making process, and even “re-licensing” requirements if significant changes are made to the algorithm.
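To make the idea of ongoing assessment concrete, the kind of real-world performance tracking described above could be sketched roughly as follows. This is a minimal, hypothetical illustration, not anything specified in the proposal or by the FDA: it assumes a deployed model whose predictions can be compared against later-confirmed outcomes, and flags the tool for review when its rolling accuracy falls a set margin below the accuracy it demonstrated at initial approval. The class name, thresholds, and window size are all illustrative assumptions.

```python
from collections import deque

class PerformanceMonitor:
    """Hypothetical post-market surveillance sketch: track a medical AI
    tool's rolling real-world accuracy against its approval-time baseline
    and flag it for regulatory review if performance degrades."""

    def __init__(self, baseline_accuracy, tolerance=0.05, window=50):
        self.baseline = baseline_accuracy   # accuracy shown in initial trials
        self.tolerance = tolerance          # allowed drop before flagging
        self.window = deque(maxlen=window)  # rolling window of recent cases

    def record(self, prediction, actual):
        """Record one real-world case: did the AI's prediction match
        the confirmed outcome?"""
        self.window.append(prediction == actual)

    def rolling_accuracy(self):
        """Accuracy over the most recent cases, or None if no data yet."""
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

    def needs_review(self):
        """True once rolling accuracy falls below baseline minus tolerance,
        signaling that a 're-licensing' style evaluation may be warranted."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

# Simulated deployment: the tool was approved at 90% accuracy, then
# degrades in the field (40 correct cases followed by 10 misses).
monitor = PerformanceMonitor(baseline_accuracy=0.90, tolerance=0.05, window=50)
for _ in range(40):
    monitor.record(1, 1)   # correct predictions
for _ in range(10):
    monitor.record(1, 0)   # incorrect predictions
# Rolling accuracy is now 0.80, below the 0.85 review threshold.
```

In practice, the researchers' proposal contemplates far richer checks than a single accuracy number, including audits of the AI's decision-making process, but even this toy example shows how a regulator could define an objective, automatic trigger for re-evaluation.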
A Parallel to Physician Licensing
The analogy to physician licensing is deliberate. Doctors undergo years of training, pass rigorous exams, and are then granted a license to practice. However, their license isn’t a one-time event. They are required to participate in continuing medical education, maintain board certification, and adhere to ethical standards throughout their careers. The researchers suggest a similar model for medical AI, with ongoing monitoring and evaluation to ensure it remains safe and effective.
The proposal acknowledges the complexities of implementing such a system. It would require significant investment in infrastructure, expertise, and regulatory oversight. However, the researchers argue that the potential benefits—improved patient safety, more effective treatments, and greater trust in medical AI—outweigh the costs. The conversation around regulating medical AI is just beginning, but this proposal offers a compelling framework for ensuring that these powerful tools are used responsibly and ethically.
