AI in the clinical lab: what are the regulatory implications?

Since 2018, FDA approvals of artificial intelligence (AI)-based medical devices have been on the rise. Among them is a growing array of AI algorithms that may be used in clinical labs or that leverage clinical lab data.

AI attracts regulatory scrutiny because of concerns about the technology's transparency and explainability. However, not every healthcare product that uses AI has a medical purpose or necessarily poses a high risk to patients. A risk-based approach to regulation is needed to ensure that these products, which can help save lives and improve patient outcomes, are used safely and appropriately.

The three types of AI

AI-based algorithms can be broadly divided into three categories: assistive, automated, and autonomous, based on the level of human involvement in a healthcare decision.

  1. Assistive AI – supports the healthcare professional (HCP) to inform or drive clinical management
  2. Automated AI – treats or diagnoses the patient with the possibility of the HCP approving or overriding the AI’s decision
  3. Autonomous AI – acts completely independently to treat or diagnose the patient without an HCP’s review

As AI-based products become more autonomous and are applied to more serious health conditions, they pose greater risk to patient safety and therefore require a higher level of trust, according to the International Medical Device Regulators Forum (IMDRF). For products that meet the definition of a medical device, a higher risk categorisation generally calls for greater oversight and regulation.
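
To make the two-axis logic concrete, here is a minimal sketch of how such a categorisation could be expressed, assuming a simplified model in which autonomy level and condition severity each add to the risk tier. The tier labels and boundaries are illustrative assumptions, not the IMDRF's actual categorisation.

```python
# Illustrative only: a simplified two-axis risk lookup, inspired by the idea
# that more autonomy plus a more serious condition means higher risk.
# The tier labels and boundaries below are assumptions, not the actual
# IMDRF categories.

AUTONOMY_RANK = {"assistive": 0, "automated": 1, "autonomous": 2}
SEVERITY_RANK = {"non-serious": 0, "serious": 1, "critical": 2}

def risk_tier(autonomy: str, severity: str) -> str:
    """Return an illustrative risk tier, from I (lowest) to IV (highest)."""
    score = AUTONOMY_RANK[autonomy] + SEVERITY_RANK[severity]  # 0..4
    return ["I", "II", "III", "IV", "IV"][score]

print(risk_tier("assistive", "non-serious"))  # I  -> minimal oversight
print(risk_tier("autonomous", "critical"))    # IV -> highest oversight
```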

IMDRF risk categorisation of AI-based digital health products

Risk categorisation and qualification of AI-based products with a medical purpose are key policy areas in digital health regulation; guidelines and practices are changing constantly and must be tracked carefully.

Three examples from the clinical lab

Many products using AI in the clinical lab today do not qualify as a medical device and therefore should not be regulated as such. These include tools for workflow optimisation and quality management, as well as low-risk clinical decision support (CDS) applications.

A growing number of clinical labs, for example, are adopting remote service models with their vendors to enable more proactive monitoring and support of IVD instruments. Operational data is shared through online connectivity, and AI may be used to further automate some of these processes and to better predict maintenance requirements. The AI in these platforms, however, would not qualify as a regulated medical device.
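
As a rough illustration of how remote monitoring might surface a maintenance need, the sketch below applies a simple rolling-average drift check to hypothetical instrument telemetry. The signal, window size, and tolerance are invented for the example; real remote-service platforms rely on far richer, vendor-specific models.

```python
# Illustrative sketch: flag an IVD instrument for preventive maintenance
# when a telemetry signal (here, a hypothetical pump pressure) drifts
# beyond a tolerance band around its recent rolling average.

from collections import deque

def drift_alerts(readings, window=10, tolerance=0.15):
    """Yield (index, value) pairs where a reading deviates from the rolling
    mean of the previous `window` readings by more than `tolerance`
    (as a fraction). Thresholds are illustrative only."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            baseline = sum(recent) / window
            if abs(value - baseline) / baseline > tolerance:
                yield i, value
        recent.append(value)

pressures = [101, 100, 102, 99, 101, 100, 103, 100, 101, 102,  # stable
             100, 101, 118, 99, 100]                            # one spike
for idx, val in drift_alerts(pressures):
    print(f"Reading {idx}: {val} kPa outside tolerance -> schedule a service check")
```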

As another example, some healthcare organisations in the Asia Pacific region use software platforms that leverage AI-based algorithms to optimise the workflow of multidisciplinary teams. These algorithms are typically assistive in nature and thus relatively low risk, so they are unlikely to command significant regulatory oversight (note: any modules that qualify as a medical device and pose a higher risk to patient safety should be evaluated independently).

Regulatory requirements of AI in clinical labs

At the higher end of the risk spectrum, a growing number of healthcare technology firms are developing AI-based algorithms that combine clinical laboratory data with other factors, such as imaging or clinical information, to support patient care (see, for example, this recent case study on the use of AI to expand the clinical utility of serum tumour markers in China). Such algorithms can directly diagnose and triage patients, and thus may be subject to greater oversight and regulation.
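
As a toy example of what "combining" laboratory data with other inputs might look like, the sketch below folds a serum tumour marker, age, and an imaging flag into a single logistic triage score. The features, weights, and threshold are entirely invented and carry no clinical meaning; they only illustrate why an algorithm that directly informs diagnosis and triage attracts closer regulatory attention.

```python
import math

# Toy illustration of combining laboratory, demographic, and imaging inputs
# into a single triage score. All weights and the decision threshold are
# invented for the example and have no clinical validity.

WEIGHTS = {"ca125_u_ml": 0.015, "age_years": 0.03, "suspicious_imaging": 1.2}
BIAS = -4.0

def triage_score(features: dict) -> float:
    """Logistic combination of the inputs -> probability-like score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

patient = {"ca125_u_ml": 80.0, "age_years": 62, "suspicious_imaging": 1}
score = triage_score(patient)
print(f"Triage score: {score:.2f} -> "
      f"{'refer for review' if score > 0.5 else 'routine follow-up'}")
```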

Although the regulation of health AI is still in its infancy, transparency and explainability are two recurring principles among current frameworks in Asia Pacific, Europe, and the United States. When algorithms can make decisions on their own, it is important that humans can clearly understand how the algorithm arrived at a particular decision. Besides building trust, AI that is transparent in its operations enables users to fix problems in the model and to continually improve its capability.
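
For a simple linear model, one way to make the output explainable is to report each input's contribution to the score alongside the prediction, as sketched below. The feature names and weights are invented for illustration; more complex models typically need dedicated explainability techniques (for example, SHAP or LIME).

```python
# Illustrative explainability sketch: for a linear model, each input's
# contribution to the score is simply weight * value, which can be reported
# alongside the prediction so an HCP can see what drove it.
# Feature names and weights are invented for the example.

WEIGHTS = {"hba1c_pct": 0.6, "ldl_mmol_l": 0.3, "age_years": 0.02}
BIAS = -5.5

def explain(features: dict):
    """Return the raw score and the per-feature contributions, largest first."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score, sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

score, ranked = explain({"hba1c_pct": 8.1, "ldl_mmol_l": 3.9, "age_years": 58})
print(f"Raw score: {score:.2f}")
for name, contrib in ranked:
    print(f"  {name}: {contrib:+.2f}")
```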

Key Takeaways

  • Regulation of AI in digital health is determined by whether the product qualifies as a medical device and by its level of risk to the patient
  • Data aggregation tools and remote instrument maintenance are examples of low-risk AI-based products in the clinical lab that do not qualify as medical devices and therefore do not require regulation as such
  • AI in healthcare should be explainable and transparent to gain the trust of its users