AI and medical devices – Our experts answer three common questions

Artificial Intelligence (AI) refers to the application of a broad range of computing techniques, from neural networks to machine learning, which are designed to mimic human behaviour. From speech recognition to the analysis of large volumes of data to produce outputs based on their programmed algorithms, AI systems seek to use advances in computing power and speed to supplement human skills.

09.03.2021


In the context of medical devices, AI is being used to gather and analyse vast quantities of data and to provide outputs that assist clinicians in the diagnostic process and help them deliver better care to patients. This raises the following question: does the existing medical device legislative framework have enough flexibility to accommodate the increased use of this rapidly developing technology in the day-to-day care of patients?

A recent study published in the Lancet[1] revealed some statistics behind the recent surge in the number of approved devices using AI or machine learning technology. This study reported that in the last five years, 240 devices received regulatory approval in Europe, slightly more than the number approved in the US. For various reasons, in our view, this is a significant underestimate of the true position. The number of devices approved in Europe each year has been approximately doubling since 2017.

The European Commission has issued several publications on AI and strongly advocates the use of a risk-based approach in its White Paper on Artificial Intelligence[2]. In this Paper, and the related Report on the safety and liability implications of AI[3], the Commission discusses the challenges that some of the characteristics of AI will bring to both EU and national liability frameworks, and the likelihood that the effectiveness of these regimes will be reduced.

AI technology encompasses characteristics including autonomy, data dependency and opacity, each of which gives rise to certain issues.

Autonomy – AI technologies make decisions independently of the initial coding embedded by the manufacturer. Algorithms are given the scope to adapt and learn. Issues: unintended, AI-driven outcomes could cause harm. There may be situations where the outcomes of AI systems cannot be fully determined in advance or adequately identified and managed in post-market surveillance.
Data Dependency – AI systems and algorithms are commonly trained using sets of data. As a result, their operation depends on the datasets used to train the system to make decisions. Two categories of dataset exist: “wet” datasets, which continue to change while the system operates, and “dry” datasets, which are closed and do not change once the system is in operation. Issue: deficiencies in the data used to train the system could result in the AI making autonomous and harmful decisions.
Opacity – The decisions made by AI systems are often difficult to trace, and understanding why an AI system has chosen a specific outcome is highly complex. This is often referred to as the “black-box effect”. Issues: if an AI system malfunctions and produces an unintended outcome, it must be possible to discern the cause to ensure that the outcome does not occur again. Manufacturers are obliged to report and address such issues and to prevent them from recurring, but this will need ongoing attention.
In an AI-guided diagnosis, could a radiologist, for example, be held liable if they depart from a diagnosis suggested by an AI-powered diagnostic medical imaging device that complies with the EU regulatory framework?

In considering the liability of a clinician who disregards the information provided by an AI-based medical device and chooses to diagnose a patient independently, the primary question is which laws govern any potential claim.

The EU MDR[4] regulates the manufacture and distribution of medical devices, and empowers Member States to penalise breaches in relation to these activities. The EU MDR does not regulate the activity of healthcare professionals (HCPs). Each HCP must exercise clinical discretion in deciding how to treat their patient.

As such, where an HCP makes a divergent diagnosis, his or her conduct will be regulated through well-established tortious principles of clinical negligence (and professional discipline), rather than under the EU MDR. The HCP will have been negligent if his or her conduct and the care provided to the patient fall below medically acceptable standards and directly cause harm. An HCP could be negligent if he or she (a) follows an obviously incorrect output from an approved system; or (b) chooses to disregard the output from the system.

The degree of trust that exists between clinicians and AI-powered devices is hard to assess and is something the EU predicts will be hugely dependent on the future decisions taken to regulate this technology. The creation of an “ecosystem of trust” is a key theme of the White Paper published last year. It remains to be seen whether the healthcare community will treat this technology with scepticism and caution, as a result of the autonomy, opacity and data dependence of the decision making, or whether these devices will be trusted and confidence placed in the regulatory process.

The EU MDR does not require devices to explain or render transparent their decisions to the end-user. However, the regime does require manufacturers to eliminate or reduce the risk of user error. Further, device manufacturers are required to maintain comprehensive post-market surveillance systems. As such, manufacturers have an important role to play in building user trust.

Have AI algorithms used in medical devices been subject to the same rigorous testing and auditing standards followed for the assessment and deployment of other medical devices?

There are currently no plans to establish a unique, standalone regulatory route for AI-based medical devices in Europe, and the European Commission has spent significant time pondering how to apply the impending EU MDR and EU IVDR[5] (which both apply to software) to this unique category of products.

The route to approval under the EU MDR depends on the classification of the medical device. For the lowest risk medical devices (Class I), the manufacturer has sole responsibility for conducting conformity assessments against the regulations. Medical devices in the higher risk classes (IIa, IIb, III) and most in-vitro diagnostic devices (IVDs) are assessed by Notified Bodies, which are accredited to conduct conformity assessments. For all categories of device, a CE mark must be affixed to the device following conformity assessment before it is placed on the market.

Devices can change in terms of design and characteristics after conformity assessment, but only if this is in accordance with an established quality management system (QMS). Further, such changes generally carry regulatory obligations (for example, approval by the relevant Notified Body). Manufacturers are prohibited from promoting uses for a device other than those stated in the intended use for which the conformity assessment was carried out.

This means that while a manufacturer can place an AI device on the market which is able to change, this can only be within a pre-defined scope and will carry regulatory implications. In the context of AI, this raises significant difficulties because: (a) the ability of an AI device to change its level of functionality via machine learning techniques can be difficult to limit in advance; and (b) as a direct result, new uses and functions could sit beyond those approved through the conformity assessment procedure. This will significantly impact the scope of AI-based devices that are placed on the market.

Have regulatory procedures been fast tracked owing to the urgent demand for COVID-19 related diagnostic solutions?

In our view, there is no evidence to suggest that the established regulatory procedures have been fast tracked or sidestepped due to the current COVID-19 pandemic and the desire for new AI-based technology to aid the treatment of patients.

The UK government has issued guidance relating to the development of software or apps specifically in response to COVID-19, and temporary exceptional use authorisation or derogation exemptions are available prior to full regulatory approval (which manufacturers must undertake to eventually secure). Applying for either of these derogations will still require the provision of significant evidence to the regulator. Any derogations currently in existence across Europe are temporary and may well be lifted ahead of the EU MDR and IVDR coming into force later this year, which raises the bar for compliance significantly.


Bristows Life Science Summit 2021

Following on from the success of our previous Bristows Life Sciences Summit on gene editing, we will be exploring the use of artificial intelligence in the medical sphere in another big debate in November 2021.

Keep an eye on our events page for further details, and register your interest here.



[1] https://www.thelancet.com/journals/landig/article/PIIS2589-7500(20)30292-2/fulltext
[2] https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
[3] https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0064&from=en
[4] REGULATION (EU) 2017/745
[5] REGULATION (EU) 2017/746