“Trust me, I’m an algorithm”

Following a hiatus in in-person events imposed by the COVID-19 pandemic, the Bristows Life Sciences Summit returned in style on 16 November, with a panel of leading figures from the healthtech sector discussing the question: “Is AI the future of healthcare?”

01.12.2021


For the first time ever, the Summit was run as a hybrid event, with some 40 people attending in person at the Royal Society of Medicine and a further 200 tuning in online to hear the discussion, which was moderated by Dame Joan Bakewell CBE.

Our illustrious panel was composed of: Professor David Lane, Director of the Edinburgh Centre for Robotics and a member of the UK Government’s AI Council; Professor Alan Winfield, Professor of Robot Ethics at UWE Bristol; Eleonora Harwich, Head of Collaborations at the AI Lab at NHSX; and Johan Ordish, Group Manager for Medical Device Software and Digital Health at the Medicines and Healthcare products Regulatory Agency (MHRA).

Professor Lane kicked off the evening with his response to the question: What is ‘AI’ and what does it mean? His starting point was the definition of AI employed in the National Security and Investment Act: “technology enabling the programming or training of a device or software to perceive environments through the use of data, interpret data using automated processing designed to approximate cognitive abilities, and make recommendations, predictions or decisions, with a view to achieving a specific objective.”

As Professor Lane noted, this is a broad definition which actually captures quite a lot of fairly mundane technologies, including the automated Docklands Light Railway. Then again, it needs to be: AI is still a very long way from becoming a silicon-based superintelligence which renders humanity obsolete, despite Elon Musk’s confident predictions at the World AI Conference in Shanghai back in 2019.[1] Instead, AI is becoming quite good at performing fairly specialised tasks: natural language processing, playing board games like chess and Go, driving vehicles. As Professor Lane also pointed out, however, AI is not yet mature enough to be left on its own to perform a task unsupervised. It has only been successfully deployed under human supervision, as with the Autopilot function in Tesla’s cars, or in circumstances where the costs of failure are negligible, as with in-home assistants like Alexa. It is not yet ready to wholly replace human beings in contexts where failure could result in harm.

Instead of sci-fi visions of AI supremacy, Professor Lane had two main concerns about the deployment of AI. The first is privacy: AI can enable an intrusive “panopticon” of data surveillance, with decisions being taken about people without their knowledge or consent. The second is bias in AI decision-making. As a resident of Edinburgh, Professor Lane leaned here on the amusing example of people with Glaswegian accents struggling to interact with voice-activated technology. There are, however, far more tragic examples of medical technology failing the people it is meant to protect because it has not been validated on a data set which is properly representative of the population.[2] In Professor Lane’s view, the solution to these issues is proper regulation, which has been lacking in recent years and is only now emerging through legislation such as the GDPR and through enforcement authorities starting to flex their muscles against the technology giants.
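
To make the validation point concrete, the short sketch below (purely illustrative, and not something presented at the Summit) shows the kind of per-subgroup check that a properly representative validation set makes possible; the subgroup names and figures are invented for the example.

```python
# Purely illustrative: a toy per-subgroup accuracy check on a validation set.
# The subgroups, predictions and outcomes below are invented for the example.
from collections import defaultdict

# (subgroup, model_prediction, true_outcome) for a hypothetical validation set,
# with one subgroup heavily under-represented.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1),
]

totals, correct = defaultdict(int), defaultdict(int)
for subgroup, predicted, actual in records:
    totals[subgroup] += 1
    correct[subgroup] += int(predicted == actual)

# A single headline accuracy figure can hide a subgroup the model serves badly.
overall = sum(correct.values()) / len(records)
print(f"overall accuracy: {overall:.0%}")
for subgroup in totals:
    accuracy = correct[subgroup] / totals[subgroup]
    print(f"{subgroup}: n={totals[subgroup]}, accuracy={accuracy:.0%}")
```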

Professor Lane was followed by Professor Winfield, who addressed the question: How do we address ethical challenges that arise when AI is used in healthcare? Neatly following on from Professor Lane’s comments on the need for proper regulation and standards for AI, Professor Winfield pointed out that since Isaac Asimov first created his famed Laws of Robotics, we have been overrun with ever more sets of ethical principles for machine intelligence; Professor Winfield counted 86 currently in existence. The problem he identified is the gulf between establishing such good ethical principles and actually working out what those principles look like when applied in practice. He was firmly of the view that the way to bridge that gulf is to treat AI ethics as a super-set of AI safety: considering the psychological, social, economic and environmental harms that may result from the use of a new AI system, rather than just the potential physical harms.


The way in which the safety of new technologies is typically ensured is through the creation of technical standards with which those technologies must comply, and indeed such standards are already deeply entrenched in product safety regulation in Europe. Professor Winfield went so far as to describe technical standards as the “infrastructure of the modern world”, which is hard to disagree with given how pervasive they are. Standard setting for AI is something which Professor Winfield is already engaged in, as he was involved in the development of BSI’s ethical guidelines on robotics and IEEE’s proposed international standard on AI transparency.[3] He described these sorts of standards as the equivalent of an MOT for artificial intelligence, and rejected any notion of placing trust in AI systems which have not been subjected to them.

Eleonora Harwich followed Professor Winfield, answering the question: What can be done to build and maintain trust amongst all stakeholders in the healthcare industry when it comes to AI? Harwich focused on the work that she and her unit are doing at NHSX to demonstrate the potential of AI to do good and so build public trust in it. Following on from the previous speakers, she agreed that regulation and standards are at the heart of building trust, but reminded the audience that when AI is used as part of a medical device, extensive regulation already exists. The problem is that existing medical device regulation requires the evaluation of clinical evidence regarding the safety and performance of the device, and AI devices are currently let down by a lack of standards for reporting that evidence. This is an issue which Bristows has encountered before and which to some extent applies to all software medical devices, not only AI. Fortunately, it is an issue which NHSX is trying to address through its AI Award.

Other barriers to trust in AI technology which Harwich is working to overcome include the staggering complexity of the regulations to which AI developers are currently subject, and the need for the NHS workforce to develop the skills required to work effectively with new AI systems.

An interesting issue reported by Harwich, arising from consultations with patients, was the importance of communication in ensuring trust in AI systems. That statement alone may seem trite, but the interesting part was the importance that patients apparently placed on being given an explanation of the technology which was not so dumbed down that it lost nuance. Patients seemingly have a desire to properly understand the technology they are confronted with and are willing to put in the work to engage with it at a detailed level. Giving the private sector access to NHS patient data was also an area of concern, but not in the way which might have been expected: rather than having particular concerns about privacy, patients were more concerned about ensuring that the NHS retained ownership of that data, indicating a desire not to let valuable NHS assets slip into private hands. Our own Bristows survey on the topic of public trust in AI revealed a similar concern.

The panellists’ speeches were rounded out by Johan Ordish, who addressed: How should AI in healthcare be regulated? To Ordish, regulation of AI in healthcare is all about proportionality to the risk that it poses, the same as for any other technology incorporated into a medical device. While it is true that AI systems can pose risks because they can be data hungry, can suffer from a lack of validation, and can make decisions which are insufficiently transparent, they need not be any of those things. In that sense AI systems, like cats, come in all different shapes and sizes. Ordish asked rhetorically why we should treat this immense variety of AI systems as all being equally risky, and so regulate them homogeneously, when we don’t regulate ownership of house cats in the same way as ownership of lions.

Starting from the premise that AI should be regulated in proportion to the risk it poses, Ordish went on to question whether AI systems deployed across different sectors should be regulated in a blanket manner. He drew attention to the European Commission’s recent proposal for a regulation on artificial intelligence, which identifies certain categories of AI system as “high risk” and would subject them to a conformity assessment procedure before they can be placed on the European Union market.[4] On the basis that the specific risks posed by AI systems depend on the particular context in which they are used, Ordish raised the possibility that such legislation would regulate all types of AI but would not regulate any one type well. This is not to say that Ordish was in any way against strong regulation for AI: as his speech drew to a close, he referred back to the question put to Eleonora Harwich, which was how to get people to trust AI. In his opinion, this was the wrong question to be asking; the right question is how we get AI to be trustworthy in the first place. Based on the views expressed by all of the panellists on the night, the answer to that question is deceptively simple: good regulation, and technical standards for manufacturers to follow.


The panellists’ speeches were followed by a Q&A led by Dame Joan Bakewell, in which the audience had the opportunity to put questions to the panel through the online platform on which the event was being livestreamed. Ordinarily this is where the technology would cause some wrinkles, but on the night the platform performed seamlessly, demonstrating that hybrid events need not be clunky if the technology enabling them works as intended.

The questions from the audience ranged over a variety of topics. One question invited the panel to give their views on the risk of overreacting to the dangers of AI and stifling innovation through over-regulation. Another audience member wondered whether AI systems provided directly to consumers should be regulated differently from AI systems used only by healthcare practitioners in a clinical context. Each question elicited considered and thought-provoking responses from our panellists. Eleonora Harwich observed that over-regulation of AI devices can result in manufacturers littering their devices with disclaimers in order to avoid falling within the regulations, which may be harmful to consumers, who rarely read the product description in full.

However, the key theme running through the Q&A and much of the evening was AI transparency and what it means in different contexts. Dame Joan Bakewell pressed the panel hard on what it means for AI to be transparent to the patients on whom it will be used, and encouraged them to address how patients can feel comfortable placing their trust in a technology which they are unlikely to understand. Johan Ordish offered the answer that transparency means different things to different people, and that AI manufacturers need to be responsive to that: to clinicians it generally means being able to understand how an AI system reached a particular decision so that they can sense-check its reasoning, while to patients it is more likely to mean understanding how the system makes use of their data and what the implications of the system’s decisions will be for them. Meanwhile, Professor Winfield and Professor Lane were broadly in agreement that the route to patient trust is mandatory technical safety standards and rigorous testing. In this regard, Professor Winfield pointed to the level of trust which consumers place in the aviation industry whenever they board a plane: long-established safety standards which are deeply embedded in the industry mean holidaymakers routinely place their lives in the hands of machinery of which they have little understanding.
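
As a purely illustrative aside (not an example given on the night), the clinician-facing sense of transparency Ordish described can be pictured with a toy linear risk model whose output is broken down into per-feature contributions; the feature names, weights and patient values below are invented.

```python
# Purely illustrative: decomposing a toy linear risk score into per-feature
# contributions, so a clinician could sense-check how the result was reached.
# The features, weights and patient values are invented for the example.

weights = {"age": 0.03, "systolic_bp": 0.02, "smoker": 0.40}
patient = {"age": 62, "systolic_bp": 145, "smoker": 1}

# Each feature's contribution is its weight multiplied by the patient's value.
contributions = {name: weights[name] * patient[name] for name in weights}
risk_score = sum(contributions.values())

print(f"total risk score: {risk_score:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:.2f}")
```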

The evening ended in the same way in which it began, with the audience being asked the question “is AI the future of healthcare?” The consensus opinion at the start of the evening was an unequivocal “yes”, and despite a few waverers, that consensus had barely changed by the end. Listening to some of our panellists it almost seemed that AI was already part of the present state of healthcare. It will be exciting to see over the next few years how the world responds to this bold new technology.

Download our white paper on public attitudes to AI here.
Watch the full video of the Bristows Life Sciences Summit 2021 “Trust me, I am an algorithm.” Is AI the future of healthcare?

—————–
[1] Transcript of debate with Jack Ma published by WIRED, 9 January 2019. https://www.wired.com/story/elon-musk-humanity-biological-boot-loader-ai/
[2] For instance, the recent revelation that oximeters are less accurate on people with darker skin: https://www.theguardian.com/society/2021/nov/21/sajid-javid-announces-review-racism-bias-medical-devices
[3] BS 8611:2016; IEEE P7001
[4] EU Procedure Number 2021/0106/COD

Jamie Hatzel

Author
