First published in our Biotech Review of the year – issue 9.
The traditional double-blind, randomised, controlled clinical trial has long served as the gold standard for generating the reliable and unbiased data needed to support the approval of new therapies. However, the strict requirements for clinical trials also mean that the trial stage is one of the costliest periods in the development cycle of a therapy, and one that carries a significant risk of failure. The attrition rate at each clinical trial phase means that, on average, a candidate drug tested in phase I has only a 14% chance of ultimately obtaining regulatory approval. Although many of these trials fail due to an inherent issue with the trialled therapy, such as its efficacy or safety, a large number also fail as a result of problems with the trial itself, for example a flawed study design or excessive participant drop-out.
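The compounding effect of per-phase attrition can be made concrete with a short calculation. The per-phase transition rates below are illustrative round numbers, not figures from this article, chosen only so that their product lands near the cited overall average:

```python
# Illustrative only: these per-phase success rates are hypothetical
# round numbers; the cited 14% is an overall industry average.
phase_success = {
    "Phase I -> Phase II": 0.63,
    "Phase II -> Phase III": 0.31,
    "Phase III -> Submission": 0.75,
    "Submission -> Approval": 0.95,
}

# The chance of clearing every stage is the product of the
# individual transition probabilities.
overall = 1.0
for stage, p in phase_success.items():
    overall *= p

print(f"Overall chance of approval from phase I: {overall:.0%}")  # → 14%
```

Because the probabilities multiply, even modest improvements at a single stage (for example, better-designed phase II trials) compound into a noticeably higher overall approval rate.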
The complex and multifaceted nature of randomised clinical trials means elements of the trials are ripe for targeted improvements using AI technologies, with the overall aim of streamlining the trial process, reducing the time and financial commitment and increasing the likelihood of successful outcomes. In some instances, AI-leveraged improvements have the potential to amount to a radical paradigm shift in how we assess the safety and efficacy of therapies. In this article we consider a few of the areas where emerging AI interventions are beginning to have an impact on how clinical trials are run.
Any clinical trial relies on a detailed protocol, put in place before the trial commences, which sets out all aspects of how the trial is to be run, from the dosing regimen and the inclusion and exclusion criteria for participants to the endpoints by which success and failure will be measured. It is crucial to make the right choices when designing the protocol: although protocols are often amended during a trial, any subsequent change to the trial design is likely to lead to months or years of further delay and significant additional financial cost.
In order to pin down the large number of variables in a trial protocol, researchers can inform their decision-making by referring to the data contained in previous successful and unsuccessful trials, patient clinical data and regulatory filings. AI algorithms can process and analyse the vast quantities of data from these varied sources more effectively than a team of human clinicians could, and can run simulations to predict how changing a particular variable, such as an aspect of the participant inclusion criteria, affects the overall likelihood of trial success. At this stage in their development, trial design algorithms are better suited to helping researchers identify potential pitfalls in their proposed protocols, but as the technology advances they could be harnessed to design the protocols themselves based on starting criteria set by the researchers.
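As a rough illustration of the kind of simulation described above, the sketch below uses a toy Monte Carlo model to explore one hypothetical design variable: a biomarker inclusion cut-off that trades a larger treatment effect against a smaller recruitable sample. The cut-offs, effect sizes and sample sizes are all invented for illustration and bear no relation to any real trial:

```python
import math
import random
import statistics

random.seed(0)

def simulate_trial(effect_size, n_per_arm, sd=1.0, n_sims=2000, z_crit=1.96):
    """Crude power estimate: the fraction of simulated two-arm trials in
    which the treatment/control difference reaches nominal significance."""
    successes = 0
    for _ in range(n_sims):
        treat = [random.gauss(effect_size, sd) for _ in range(n_per_arm)]
        ctrl = [random.gauss(0.0, sd) for _ in range(n_per_arm)]
        se = math.sqrt(
            statistics.variance(treat) / n_per_arm
            + statistics.variance(ctrl) / n_per_arm
        )
        z = (statistics.mean(treat) - statistics.mean(ctrl)) / se
        if z > z_crit:
            successes += 1
    return successes / n_sims

# Hypothetical trade-off: a stricter biomarker cut-off enriches for
# likely responders (larger effect size) but shrinks the recruitable
# population (smaller sample size). All numbers are invented.
for cutoff, effect, n in [("none", 0.30, 200), ("moderate", 0.45, 120), ("strict", 0.60, 60)]:
    print(f"cut-off {cutoff}: estimated probability of success {simulate_trial(effect, n):.2f}")
```

Real trial-design tools model far more variables (dosing, endpoints, drop-out) and are trained on historical trial data rather than random draws, but the principle of sweeping a design choice and estimating its effect on the chance of success is the same.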
The paradox of the current clinical trial landscape is that although a failure to recruit sufficient numbers of participants is the most common reason for trial termination (in some analyses amounting to 55% of all terminated trials), there are large numbers of patients who would like to enrol in a trial to treat their condition but are unaware of the opportunities available to them. Placing the onus of finding a suitable trial on the patient, who must search huge databases such as ClinicalTrials.gov, or on the treating physician, who may simply not have heard of an ongoing trial, can lead to an unfortunate mismatch between resource and opportunity. The complexity of the inclusion and exclusion criteria in many study protocols also means that a candidate participant is often not well placed to judge whether they could enrol in a trial.
Natural language processing (NLP), an application of AI with the goal of enabling a computer to understand the meaning of human language in the form in which it would normally be written or spoken, can be harnessed to analyse large swathes of the patient medical records of the general public to determine those that would qualify as participants in a particular trial. California-based health-tech company Deep 6 AI is one of the frontrunners in this field, using NLP to process unstructured medical records such as handwritten doctors' notes to extract clinical data points like symptoms and diagnoses, which the software algorithm can then match to the criteria of recruiting clinical trials. Deep 6 AI's algorithm is even able to identify patients with conditions that are not expressly referred to in the medical records.
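The matching step can be sketched in miniature. The sketch below assumes an upstream NLP model (not shown, and not a representation of Deep 6 AI's actual system) has already converted free-text records into structured data points; the patient fields and trial criteria are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    """Structured data points, assumed to be extracted by an upstream NLP model."""
    age: int
    diagnoses: set = field(default_factory=set)
    medications: set = field(default_factory=set)

@dataclass
class Trial:
    """Hypothetical inclusion/exclusion criteria for a recruiting trial."""
    name: str
    min_age: int
    max_age: int
    required_diagnoses: set
    excluded_medications: set

def is_eligible(patient, trial):
    # A patient qualifies when they fall inside the age window, carry all
    # required diagnoses, and take none of the excluded medications.
    return (
        trial.min_age <= patient.age <= trial.max_age
        and trial.required_diagnoses <= patient.diagnoses
        and not (trial.excluded_medications & patient.medications)
    )

p = Patient(age=58, diagnoses={"type 2 diabetes", "hypertension"},
            medications={"metformin"})
t = Trial("DM2-TRIAL-001", 40, 75, {"type 2 diabetes"}, {"insulin"})
print(is_eligible(p, t))  # → True
```

Production systems must of course handle far messier inputs (synonyms, negation, temporal qualifiers like "history of"), which is precisely where the NLP layer earns its keep.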
Whereas using AI to inform trial protocol design may rely to a large extent on data from past trials which is publicly available, in the case of participant recruitment the underlying data necessarily constitutes patients' private medical records. A key question for companies operating in this space will be how to gain access to sufficient quantities of such records, in a transparent manner that complies with privacy laws, in order to train their algorithms and provide a service that allows identifiable patients to be recruited for trials.
Synthetic control arms
A key reason why many eligible patients are not willing to enrol in clinical trials is the fear that they will be randomly allocated to the control arm of the trial and, as a result, instead of benefitting from the candidate therapy, will receive a placebo treatment or the established standard-of-care. In addition to contending with this psychological barrier to participant recruitment, trial sponsors also need to resource the control arm, which typically involves the same number of participants as the arm receiving the candidate therapy. In indications such as oncology or rare diseases where the standard-of-care is expensive, the control arm may form a considerable proportion of the overall study budget.
The increasing availability of real-world data has led to the emergence of synthetic control arms as a potential solution to these problems. Instead of relying on data from trial participants who have received a placebo, synthetic control arms create a statistical model derived from external data which can serve as a comparator for patients who have received the candidate therapy. The external data can comprise the results of previous trials, medical records representing routine care, and insurance claims data.
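One simple way to derive such a comparator is to match each treated trial participant to the most similar patient in an external real-world dataset on baseline covariates. The sketch below, using entirely invented data, shows that idea in its crudest form; production systems use far richer statistical models (for example, propensity scoring):

```python
import math

def distance(a, b):
    """Similarity on baseline covariates (here: age and a baseline score)."""
    return math.dist((a["age"], a["baseline_score"]),
                     (b["age"], b["baseline_score"]))

def build_synthetic_control(treated, external):
    """Greedily match each treated patient to the closest external record,
    without replacement, to form a synthetic comparator cohort."""
    control, pool = [], list(external)
    for patient in treated:
        match = min(pool, key=lambda e: distance(patient, e))
        control.append(match)
        pool.remove(match)
    return control

# Invented data: trial participants who received the candidate therapy...
treated = [
    {"age": 61, "baseline_score": 22, "outcome": 14},
    {"age": 55, "baseline_score": 30, "outcome": 20},
]
# ...and external real-world records representing routine care.
external = [
    {"age": 60, "baseline_score": 21, "outcome": 9},
    {"age": 54, "baseline_score": 31, "outcome": 16},
    {"age": 70, "baseline_score": 40, "outcome": 25},
]

synthetic = build_synthetic_control(treated, external)
avg = lambda rows: sum(r["outcome"] for r in rows) / len(rows)
print(avg(treated) - avg(synthetic))  # estimated treatment effect → 4.5
```

The reliability of such a comparator hinges entirely on how well the external records mirror the trial population, which is why the data-mining and matching stages attract so much regulatory scrutiny.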
The use of a synthetic control arm is most suited for indications such as certain types of cancer where the standard-of-care is well-established and the progression of the disease is relatively predictable. Rare diseases where only a very small patient population is available to conduct a trial and all participants typically receive the investigational medicine would also benefit from the availability of a synthetic control.
AI technologies are key to mining and analysing sufficient quantities of clinical data, in both structured and unstructured forms, to develop synthetic control arms that are reliable enough to support regulatory decisions. In recent years key regulatory authorities have been showing more willingness to consider synthetic control arms. In October 2020 the FDA set a new precedent by supporting the use of a hybrid external control arm in a phase III trial in recurrent glioblastoma.
The hybrid external control arm combines conventionally randomised patients with a synthetic control arm developed by Medidata Acorn AI, a company that builds synthetic control arms using its proprietary pool of more than seven million anonymised patient records. Without eliminating the traditional control arm entirely, this arrangement allows fewer patients to be enrolled to receive the standard-of-care, which for recurrent glioblastoma is known not to prolong life significantly.
Synthetic control arms and the other AI-leveraged methods mentioned above may not necessarily find their way into all future clinical trials, but they represent tools that can bring significant improvements in trial efficiency when used appropriately. As with other AI technologies, the availability of high quality data will be key for the further development and more widespread adoption of these approaches, but it is encouraging to see the number of new companies working towards transforming the clinical trials landscape.