InBrief

The evolving AI regulatory landscape and how healthcare companies can prepare

MedTech AI innovations bring with them regulatory hurdles that need addressing

The regulation of AI-powered devices in the United States has, until recently, not departed from the medical device status quo. Since 2015, the FDA has approved more than 500 AI-enabled medical devices, the vast majority reviewed through the standard 510(k) premarket pathway. 

In 2019, however, the FDA released a discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD),” which began to lay a future regulatory foundation for AI-enabled medical devices centered on risk-management practices, predetermined change control plans, and continuous performance reporting. 

The paper proposes four overarching principles for regulating SaMDs:


  1. An assessment of an organization’s culture and quality through analysis of its development, testing, and monitoring practices  
  2. Premarket assurance of safety and effectiveness  
  3. Review of a SaMD’s “pre-specifications” and “Algorithm Change Protocol” (collectively, a “predetermined change control plan”) 
  4. Real-world performance monitoring

To advance the first principle, the FDA announced efforts to develop Good Machine Learning Practice (GMLP), for which it published 10 guiding principles in 2021. Although the details of GMLP remain in development, the emphasis on market-standard software engineering, controlled model training, robust testing, and risk-management practices is clear. 

To prepare for these forthcoming regulatory standards, companies can begin planning and building their own governance and risk-management systems, which can anticipate, and even serve as models for, future FDA requirements. 

Although premarket assurance of safety and effectiveness is nothing new to medical device regulation, pre-specifications and the Algorithm Change Protocol (ACP) are. Pre-specifications describe how the SaMD is expected to evolve over time as its algorithm is updated, and the ACP describes how those algorithm changes will be implemented and controlled. 

Companies can prepare for this change by establishing their own high-level predictions for how their tools will evolve and can further improve their products by explicitly documenting their desired evolution in the design phase. 
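As a purely illustrative sketch (the structure, field names, product name, and thresholds below are hypothetical assumptions, not an FDA template), a team might capture its pre-specifications and change protocol as structured design-phase documentation that can later feed a regulatory submission:

```python
from dataclasses import dataclass, field

@dataclass
class AnticipatedChange:
    """One pre-specified modification the team expects to make post-market."""
    description: str        # what is expected to change (e.g., retraining cadence)
    rationale: str          # why the change improves the device
    validation_method: str  # how the change will be verified before release

@dataclass
class ChangeControlPlan:
    """Design-phase record pairing pre-specifications with a change protocol."""
    device_name: str
    intended_use: str
    pre_specifications: list[AnticipatedChange] = field(default_factory=list)
    change_protocol: str = ""  # summary of how changes are implemented and controlled

# Hypothetical example entry
plan = ChangeControlPlan(
    device_name="ExampleCAD",
    intended_use="Flag suspected findings on chest X-rays for radiologist review",
    pre_specifications=[
        AnticipatedChange(
            description="Quarterly retraining on newly labeled site data",
            rationale="Maintain performance as imaging hardware and case mix drift",
            validation_method="Re-run locked test set; require sensitivity >= baseline",
        )
    ],
    change_protocol="All retrained models pass automated validation and governance review before deployment.",
)
```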

Lastly, the FDA will monitor performance over time as this technology changes. To prepare, and to practice responsible monitoring, healthcare companies should define valid key performance indicators (KPIs) for their tools in the design phase and establish benchmarks against which to compare their performance. Although the particulars of performance reporting have yet to be finalized, leadership should form the habit of regularly reporting device KPIs to their governance committee, both to demonstrate GMLP and to strengthen the company’s assurance of the safety and efficacy of its products.  
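For instance (a minimal sketch; the KPI names, thresholds, and reporting format are assumptions, not FDA-specified requirements), real-world performance monitoring could compare each reporting period’s metrics against the benchmarks fixed in the design phase and flag any regression for governance review:

```python
# Minimal KPI-monitoring sketch: compare observed performance on real-world
# cases against design-phase benchmarks and flag regressions for review.
# KPI names and thresholds here are illustrative assumptions.

DESIGN_BENCHMARKS = {"sensitivity": 0.90, "specificity": 0.85}

def compute_kpis(labels: list[int], predictions: list[int]) -> dict[str, float]:
    """Compute sensitivity and specificity from binary labels and predictions."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
    }

def governance_report(labels: list[int], predictions: list[int]) -> dict[str, str]:
    """Summarize each KPI against its benchmark for the governance committee."""
    kpis = compute_kpis(labels, predictions)
    return {
        name: f"{kpis[name]:.2f} vs benchmark {target:.2f} "
              f"({'OK' if kpis[name] >= target else 'REVIEW'})"
        for name, target in DESIGN_BENCHMARKS.items()
    }

# Example reporting-period run with toy data
print(governance_report(labels=[1, 1, 0, 0, 1, 0], predictions=[1, 0, 0, 0, 1, 1]))
```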

Regarding product development, some of the principles behind current software best practices also apply in the ML space. For example, the GMLP principle on “Good Software Engineering and Security Practices” explicitly references software development “fundamentals” such as data management, data quality assurance, and cybersecurity. 
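As one hedged illustration of the data-quality side of those fundamentals (the field names, valid ranges, and acceptance threshold below are assumptions, not prescribed values), a team might gate model training on basic checks of incoming records:

```python
# Illustrative data-quality gate run before model training or retraining.
# Field names, valid ranges, and the missing-value threshold are assumptions.

REQUIRED_FIELDS = {"patient_age", "image_id", "label"}
VALID_AGE_RANGE = (0, 120)
MAX_MISSING_FRACTION = 0.05

def check_training_batch(records: list[dict]) -> list[str]:
    """Return a list of data-quality issues; an empty list means the batch passes."""
    issues = []
    if not records:
        return ["batch is empty"]
    missing = sum(1 for r in records if not REQUIRED_FIELDS.issubset(r))
    if missing / len(records) > MAX_MISSING_FRACTION:
        issues.append(f"{missing} records missing required fields")
    bad_age = [r for r in records if "patient_age" in r
               and not VALID_AGE_RANGE[0] <= r["patient_age"] <= VALID_AGE_RANGE[1]]
    if bad_age:
        issues.append(f"{len(bad_age)} records with out-of-range patient_age")
    return issues

batch = [
    {"patient_age": 54, "image_id": "img-001", "label": 1},
    {"patient_age": 430, "image_id": "img-002", "label": 0},  # out-of-range age
]
print(check_training_batch(batch))  # ['1 records with out-of-range patient_age']
```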

But GMLP also raises considerations unique to ML product development. For example, principle 9 states that “Users Are Provided Clear, Essential Information,” meaning that developers must disclose the product’s intended use in health care, the model’s performance for targeted groups, the types of data used to train and test the model, and the product’s known limitations. 

In practice, developers will need to answer questions on these topics within their product development workflow and consult target users on what to disclose and how.
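A lightweight way to operationalize that disclosure work (a sketch only; the fields, figures, and wording are hypothetical, modeled loosely on common “model card” practice rather than on any FDA form) is to maintain the answers as structured metadata that ships with each model version:

```python
# Illustrative user-facing disclosure record maintained alongside each model
# version. Field names, performance figures, and example content are hypothetical.

disclosure = {
    "intended_use": "Triage aid that flags suspected pneumothorax on chest X-rays "
                    "for review by a qualified radiologist; not for standalone diagnosis.",
    "target_population_performance": {
        "adults_18_plus": {"sensitivity": 0.91, "specificity": 0.88},
        "pediatric": "Not validated; device not intended for pediatric use.",
    },
    "training_and_test_data": "De-identified chest X-rays from three partner health "
                              "systems; held-out test set from a fourth, non-overlapping site.",
    "known_limitations": [
        "Performance not established for portable bedside imaging.",
        "May under-detect small apical pneumothoraces.",
    ],
}

def render_user_summary(d: dict) -> str:
    """Render the disclosure as plain text for inclusion in user-facing labeling."""
    lines = [f"Intended use: {d['intended_use']}", "Known limitations:"]
    lines += [f"  - {item}" for item in d["known_limitations"]]
    return "\n".join(lines)

print(render_user_summary(disclosure))
```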
