AI Summit: What the speakers learned

Lessons and key takeaways from the speakers at the Bristows Life Sciences Summit

31.05.2022

The main sign of a successful debate is whether the people involved – speakers included – go home having learnt something new, having widened their perspective on the topic discussed. We asked the four speakers at our 2021 Life Sciences Summit on AI in the medical sphere to tell us what food for thought they brought home, and it seems that “AI MoT” is the key phrase.

What did you learn from the debate?

Alan Winfield, Professor of Robot Ethics, UWE: I was delighted to learn about the wonderful work of health technology testing and regulation from co-panellists Ele and Johan. I hugely enjoyed the debate and audience Q&A, brilliantly chaired by Joan Bakewell, which really underlined for me the importance of engaging both clinicians and patients early in the healthcare AI design process. Thank you!

Eleonora Harwich, Head of Collaborations, NHS AI Lab: We had a fascinating discussion, but what really struck me was how Baroness Bakewell’s questions reminded us of the importance of being able to translate technical conversations into “what does this mean for patients?” and what avenues patients have for redress and remediation.

Johan Ordish, Group Manager (Medical Device Software and Digital Health), MHRA: What a brilliant discussion – with more insights than I have space to record. I was struck that there was a strong and consistent message from all speakers that what’s required is an “MoT for AI”, supported by standards, so that manufacturers have clear processes for demonstrating that they meet those requirements. It’s encouraging to hear that echoed outside of the MHRA, as it’s consistent with our approach to ensuring AI as a medical device is safe, effective, and available for UK patients.

Professor David Lane CBE, Co-Founder of the Edinburgh Centre for Robotics and National Robotarium, Heriot-Watt and Edinburgh Universities, Member of the UK Government’s AI Council: I was delighted with Alan’s descriptions of the analogies with certification in the aerospace industry (which has been so successful in keeping us safe) and the need to design “MoTs” for AI systems. This probably means switching off any machine learning once the systems are trained, and re-certifying if the learning is switched back on for any reason. It was also useful to hear other speakers talk about the importance of avoiding knee-jerk regulation that can inhibit innovation.

What were the key takeaways?

Professor David Lane: Triggered by the event, I afterwards picked up two books worth reading. First, I recommend the good dose of pragmatism in Gary Marcus and Ernest Davis’s 2019 book “Rebooting AI”. Although we have recently made enormous leaps in machine learning, there is still much more to do before all the promises and threats are realised. And second, the newly released “The Age of AI”, co-authored by Dr Henry Kissinger (now 98 years old) and Eric Schmidt (former Google CEO). They draw an analogy between the nuclear non-proliferation treaties of the 20th century and how to handle a future with advanced AI. The devastating potential of nuclear weapons, out of all proportion to any policy objective, and the threat of mutual assured destruction creating stability through possible annihilation, have resonance with how governments or private companies might regulate the most advanced AI systems.

Alan Winfield: For me, it was that – although more is needed – there is already a pretty solid approvals process for health tech, including AI.

Eleonora Harwich: A key takeaway was that ensuring we develop and deploy AI systems in healthcare that genuinely contribute to human flourishing – by improving patient outcomes or making a tangible difference to people on the frontline – ultimately comes down to standards, regulation and compliance.

Johan Ordish: I’d also add that the balance between the UK setting a high bar for innovative products such as AI and that bar stifling innovation is a fine one to strike; the broad litmus test is whether those products are trustworthy, and that is the question that should frame any UK “AI MoT” that might emerge.