Seminar: Value recognition in AI transactions – Bristows and One Nucleus panel event

11.05.2022

Following on from Bristows’ review of AI and life sciences commercial models in January 2022, Bristows hosted a live panel discussion with expert commercial stakeholders on value recognition in AI transactions, in association with One Nucleus. The event took place in person on 29 March 2022 and was chaired by Bristows’ Commercial IP partner, Claire Smith, alongside an excellent panel that included Adam McArthur, Assistant General Counsel, Digital, IT and Operations at AstraZeneca; Jason Rice, Assistant General Counsel (Patents) and IP lead for Digital and Data at GSK; and Steve Gardner, CEO at AI company Precision Life.

The panellists had a lively discussion about the commercial impact of AI in the life sciences, debating some thought-provoking topics including why AI companies are turning away from a service model, the use of AI applications in personalised medicine, recruitment difficulties and how to make the most of data. In this article we recap the panel discussion for those who attended and summarise the key themes that came out of the event for those who couldn’t make it.

Sliding scale of commercial models

Various commercial models are used in AI life sciences deals, from service model agreements to traditional life sciences drug discovery deals, but all of the panellists agreed that, as yet, there is no market standard. That said, a general rule of thumb is that the closer the AI model is to the novel drug target, the more likely it is that the chosen model will resemble a traditional life sciences drug discovery deal that includes upfront payments, royalties and milestones – some of which can be substantial.

According to the panellists, pharmaceutical companies are adapting their traditional agreement templates to meet the needs of an AI deal, for example by including clauses that deal with data and by scoping the field of use very specifically. They said that pharmaceutical companies often choose to retain an option over additional fields for an annual fee, given the inherent uncertainty over what an AI model might discover and therefore what its applications might be (see more on this in the “Differences in approach” section below).

The closeness of the AI to the ultimate product or service will also necessarily affect the amount of risk that an AI company takes on. For example, if the AI is being used to guide clinical decision making, the AI developer should be expected to take on more risk – perhaps more than is customary in traditional life sciences drug discovery deals. For pharmaceutical companies these risks are familiar ground, but this is not always the case for AI companies, which may be less comfortable with, or accustomed to, the risks associated with the clinical development of pharmaceuticals.

At the same time, other factors are driving AI companies towards the more traditional models and away from a service model, namely that the latter can be regarded as “low growth” and therefore less attractive to investors, an important consideration for growing companies in particular. Nonetheless, if the AI input is quite removed from any substantive intellectual property output, the deal is still likely to resemble a services model more closely.

Differences in approach

One of the difficulties of using traditional life sciences drug discovery models in this space is that they have to accommodate the potential for AI collaborations to produce non-traditional and unpredictable intellectual property rights, such as analytical models and research tools, not just drug candidates. This requires the contract to be flexible and, as far as possible, future-proofed, in contrast to traditional pharma deals, which the panellists said are well understood and quite prescriptive. For AI deals, agility needs to be built in with options and reversion mechanisms.

This flexible approach is something that comes naturally to AI companies, which the panellists said have a strong desire for deals to be agile. However, this can clash with the approach taken by pharmaceutical companies, which often have a more rigid, traditional way of working and, the panellists said, need to adopt more of a technology mindset. This includes moving away from the assumption that there will be patent protection for the AI outputs, a significant departure for pharma.

To avoid differences in culture becoming an issue, the panellists said that establishing a good cultural fit between the AI company and the pharma company is essential to making a project a success, and they recommended that pharmaceutical companies choose AI partners selectively rather than seeking to partner with lots of AI companies. From a contractual perspective, robust and flexible governance mechanisms were cited as important.

Another factor that can affect a deal is whether a pharmaceutical company has an in-house AI department. Those that do, such as GSK, can go into an AI deal knowing what their priorities are. Over the coming years, we expect an increasing number of pharma companies to establish dedicated AI departments.

Personalised medicine is a key application for AI

One discipline that AI could have a big impact on is “personalised medicine”, a concept that has been hailed as the future of healthcare ever since the human genome was sequenced. Yet until now, this promise has been hard to fulfil. The panellists agreed that this is a key area in which AI is very likely to make a real difference. This is primarily because personalised medicine relies on identifying genetic or biochemical markers that indicate the underlying mechanism of a disease and whether a person is likely to respond well or badly to a treatment. Finding these markers requires analysis of vast quantities of patient data, something at which AI is adept.

The panellists said that using this approach to stratify patients by disease mechanism could potentially help pharmaceutical companies to design smaller, quicker and ultimately better clinical trials and, at the other end, to tailor effective treatments to patients using diagnostics and related clinical decision tools.

Of course, not all pharmaceutical companies are familiar with developing diagnostics (especially in-house), so the panellists noted that this AI application will involve a learning curve for some, while those with an established diagnostics division are at an advantage. Given that the age of blockbuster drugs is largely considered to be over, matching patients to a medicine that will work better for them was acknowledged by the panellists to be an important avenue of research in which AI is being used.

Where, however, the diagnostic is not the primary goal of an AI life sciences deal, which may chiefly be to identify disease targets, the question might arise as to who has the rights to further develop and exploit a diagnostic or clinical decision support tool: the AI company or the pharmaceutical company? It seems unlikely that many AI companies will seek to enter this space directly, given the significant changes and investment required to be able to operate within the framework of medical devices regulations. Instead, they would likely partner with an established diagnostics developer.

Recruitment

A tangential but hot topic discussed by the panellists was the difficulty that AI companies and pharmaceutical companies face in recruiting the best AI talent for life sciences applications. As already mentioned, some pharmaceutical companies are establishing in-house AI departments. As a result, both pharmaceutical companies and healthcare AI companies have found themselves competing for staff against technology giants that have “fancy” headquarters in central London and offer much larger salaries.

This has meant that some pharmaceutical companies trying to establish in-house AI teams have decided to set up satellite offices in central London purely to attract the best staff. Ultimately, both AI companies and pharma hope that the reward of doing something worthwhile for patients will be enough to attract many of the best AI software developers to the life sciences sector.

Data

Data is one of the key differentiators of AI deals in the life sciences. The panellists discussed how the issues with data can vary depending on where it has come from. For example, when dealing with publicly sourced data from the NHS or from a charity, there is an expectation that if a product comes out of the use of this data, the NHS or charity will in turn derive some benefit. The nature of any products that come out of a data set can be hard to predict, but some AI companies are offering a flat royalty on any commercialised product. This is not necessarily the norm, however, and considering the difficulty in valuing data and its contribution to eventual products, traditional royalties on net sales are certainly not a given.

Conversely, when an AI company uses a pharmaceutical company’s data, the panellists said that an important concern is that the data is not shared inappropriately when the AI company does deals with, or is acquired by, other pharmaceutical companies. Strategies for protecting the data include setting up firewalls and “clean teams” at the AI company, as well as segregating the data within the organisation. Within the contract itself, the panellists said, important provisions include a change of control clause, should the AI company be bought by a competitor pharma company, and clauses ensuring that the AI company provides certain data to the pharmaceutical company on demand when needed, e.g. for regulatory filings.

Further strategies being explored to allow access to data include consortiums of companies that share data, subject to contractual and technical protections which mean that each member of the consortium can only access certain other parts of the data set on application. The panellists also cautioned against doing exclusive data deals, saying that this limits the benefit of the data.

The panellists pointed out, however, that more data is not always better – the data needs to be married to the application and to the methods of analysis employed by an AI model. Depending on the AI tool, the insights from a data set can be more or less useful.

Next up

Listen to the full discussion, with plenty of additional insight from the panellists, in the audio recording available above. Look out for more articles and upcoming events from Bristows on this topic. For more information on the work we do, see our Technology sector page.