UC Berkeley Researchers Propose Policy Options for FDA To Address Bias In AI-driven Medical Devices

Researchers at the University of California, Berkeley have released a policy brief outlining options for the US Food and Drug Administration (FDA) to address bias in artificial intelligence (AI)-driven medical devices. With the use of AI in healthcare on the rise, it is essential that the FDA regulates AI-driven Software as a Medical Device (SaMD) products to ensure they do not amplify existing health disparities between racial and ethnic groups.

The policy brief describes the current regulatory framework for AI-driven SaMDs in the US. Depending on their classification and risk level, AI-driven SaMDs are subject to varying degrees of regulatory scrutiny. A 510(k) evaluation is required for Class I and II devices, which pose low to moderate risk to users, to confirm that they are as safe, reliable, and effective as an existing, legally marketed device. New Class I or II SaMDs that are not “substantially equivalent” to a predicate device may instead go through the De Novo classification process. Class III devices, which are high-risk and important to preventing impairment of health, undergo the full premarket approval process of scientific and regulatory review to demonstrate that the benefits of the device outweigh its risks.

Despite these regulatory procedures, the policy brief points out that no US law specifically addresses AI in healthcare, so FDA evaluations of AI-driven SaMDs risk being inconsistent. The lack of regulation aimed at preventing bias in AI is another key gap. While the FDA, the Department of Health and Human Services, and the World Health Organization all recommend that AI in healthcare be regulated to ensure equity, none has yet implemented explicit standards to accomplish this goal. The Federal Trade Commission has offered specific recommendations, such as testing algorithms before and after approval to ensure they do not introduce new health inequities or worsen existing ones, but the brief argues that more needs to be done.

To address these issues, the policy brief proposes three policy options for the FDA to consider:

Option 1: Create a new and separate regulatory process for AI-driven SaMDs. The proposed process would include a panel of experts in algorithmic justice and healthcare equity who would develop benchmarks and requirements for investigating bias at every stage of development. The panel would examine each AI-driven SaMD function to confirm there is little risk of it exacerbating existing health inequities.

Option 2: Amend the FDA Total Product Life Cycle regulatory approach to include equity validation standards. Equity validation would include assessments confirming that SaMDs effectively and accurately achieve their intended outcomes, along with evidence that they produce those outcomes equitably across populations. High-risk AI-driven SaMDs would undergo prospective randomized trials demonstrating that the device does not replicate existing biases.

Option 3: Expand the scope of review within existing regulatory pathways. This would involve lowering the thresholds for FDA review so that more devices undergo the more thorough De Novo or premarket approval pathways, and narrowing both the substantial-equivalence standard for 510(k) review and the criteria for exemption.

The researchers recommend that the FDA Commissioner create a dedicated regulatory process for AI-driven SaMDs. This recommendation rests on the understanding that such a policy could prevent the approval of AI medical devices that would create or worsen existing health disparities. The process is also seen as a way to build public trust while taking advantage of the current technological transformation to shape a better future. The researchers note that, if regulated correctly, these devices have the potential to reduce healthcare biases, which would align with the Commissioner’s mission to protect and promote public health.

Stan Martin

Stan Martin is a journalist writing about all aspects of the healthcare sector. Stan's reporting spans a wide array of topics within healthcare, from medical advancements and health policy to patient care and the economic aspects of the healthcare industry. Stan has contributed hundreds of news articles to Healthcare IT Journal, demonstrating a commitment to delivering factual, comprehensive news.
