On 23 March 2021, the UK Government published its response to its call for views on artificial intelligence and intellectual property. Between 7 September and 30 November 2020, a range of organisations and individuals, including those currently commercialising and/or using AI technologies, gave their views on questions such as: should IP protect inventions or works made by machines? If so, who should own that IP?
Overall, those who responded were positive about a future in which AI supports human researchers, creators and inventors in developing new technologies. See here for a summary of the various views that were submitted, and details of the next steps that the UK Government is planning to take. Here are some of the key highlights for those in the life sciences sector:
1. Those responding to the consultation generally agreed that the UK’s current IP laws are able to meet the challenges presented by AI in many areas – for instance, in relation to liability for infringements by the use of AI systems.
2. There was a consensus that AI itself should not own IP, but there were differences of opinion about whether inventions or copyright works created by AI should benefit from IP protection. However, most of the resistance to protecting AI-generated output appeared to come from artists and others in the creative industries (as opposed to those in the life sciences and technology sectors), and was largely directed at copyright works. Although there was also a range of views on affording protection to AI-generated inventions, this received noticeably more support overall.
Patents for AI inventions
3. Can AI be an “inventor” for patenting purposes? Some respondents from innovative industries thought that it should be possible to patent inventions that have been created with the assistance of, or even devised by, AI. However, there were mixed views about whether it would be appropriate to list AI as an inventor on a patent. Some thought it might be acceptable, others did not. Amongst those who thought AI should not qualify as an inventor, some suggested clarifying the law so that an inventor includes “a person by whom the arrangements necessary for devising an invention are undertaken”. Others thought there was no need to change the laws around inventorship, as AI-generated inventions were already patentable. A few people suggested doing away with the concept of an inventor altogether for AI-generated inventions.
Some respondents’ answers were influenced by the fact that they were sceptical about whether, in practice, it would be possible for an AI system to devise an invention without some sort of human involvement. Those respondents consequently thought that the current inventorship criteria for patentability should not change. Whilst noting this, the UK Government highlighted that there were many people who argued that the current approach on inventorship potentially has a detrimental impact on innovation and could result in a lack of transparency in the innovation process. The UK Government is therefore proposing to:
- carry out a consultation later this year on a range of possible policy options, including legislative change, for protecting AI-generated inventions which would otherwise not meet the current inventorship criteria; and
- commission an economic study to enhance the UK Government’s understanding of the role that the IP framework plays in incentivising investment in AI, alongside other factors.
4. Who should own any invention that an AI system creates? Even though all respondents agreed that an AI system itself should not own any invention, there was no consensus about who should own an invention that an AI system generates or the resulting patent. Some said that it should be the AI system’s “owner”, whereas other respondents thought that the AI’s owner, its developer or even its user were all “obvious possibilities” for entitlement to an AI‑generated invention.
5. Do the exclusions from patentability need revisiting for AI inventions? The majority of respondents thought that the inability to obtain patents was a problem for the AI sector. This was believed to be more of an issue for protecting developments in AI systems themselves than for inventions generated wholly or partly by AI. However, most thought that UK patent laws on exclusions from patentability were fit for purpose when it comes to AI software, albeit that there was a need for greater clarity and predictability in the UK Intellectual Property Office’s patent exclusion practice. The common view was that the UK IPO should change its practice (rather than there being any need for legislative change), and there was also a call for international harmonisation – with some suggesting that the UK IPO should adopt an approach more akin to that of the European Patent Office.
The UK IPO is now proposing to review its patent practice and establish any differences in outcome for AI patent applications filed at the UK IPO and the EPO, with a view to publishing enhanced IPO guidelines on AI inventions in due course.
6. Disclosing AI inventions (sufficiency issues). There was general agreement that AI patent applications are able to include enough detail for a skilled person to perform an invention. However, a number of respondents noted that there may be challenges in satisfying the disclosure requirements for AI patent applications. Some were concerned about the practicalities of filing large amounts of information as part of AI patent specifications, mentioning that this could include training data sets as well as algorithms, for example. Some respondents specifically called for a new deposit system that would enable applicants to file large amounts of supporting information. However, other respondents disagreed and thought that large volumes of accompanying information should not be needed.
The UK Government is now proposing to work with stakeholders and other international partners on the feasibility, costs and benefits of a deposit system for data used to train AI systems, where that data is disclosed within patent applications.
7. Inventive step. The large majority of respondents thought that the current legislative framework on obviousness was flexible enough to deal with AI innovation – their view being that “the person skilled in the art” has a range of tools available to them and that AI technologies will be one of those tools. Most respondents therefore did not consider it necessary to extend the concept of “the person skilled in the art” to the “machine trained in the art” – although some thought this may become necessary at a later date if AI‑generated inventions come to dominate an area of technology.
Copyright and data used to train and develop AI systems
8. Copyright. Many respondents believed that current copyright laws are generally adequate and adaptable enough to cover situations where copyright materials are used to train or develop AI software. However, some respondents (particularly technology companies and researchers) said that current copyright restrictions make getting access to works difficult, and this potentially leads to bias in AI systems. Others thought that voluntary licensing was adequate and available to those who needed it, and that any exceptions to copyright infringement for the purposes of teaching AI (e.g. text and data mining for commercial research) would prejudice the legitimate interests of copyright owners.
The UK Government is now planning to review the ways in which copyright owners license their works for use with AI and to consult on measures to make this easier, including improved licensing or copyright exceptions to support innovation and research.
9. Data. There were very few comments in the responses about the protection of data more generally (as opposed to copyright works), despite how significant data is in relation to developing and training AI systems. The report mentioned just one respondent who proposed that there should be a new form of protection for data – to preserve its value, whilst at the same time encouraging data sharing for AI initiatives and also to ease concerns about disclosing data to patent offices in support of patent applications. A few other respondents mentioned that a new form of data protection might be needed to allow data to be filed with AI regulators (which is likely to be different to the data which would need to be filed as part of a patent application), so that the confidentiality of that data can be preserved.
Despite reporting these comments, the UK Government has not been spurred into taking any action to review the protections that are currently available in respect of data (at least, not yet). Sui generis database rights tend to have less relevance in the sphere of AI. This is because database rights do not protect the investment in creating the data in the first place, only the efforts in subsequently maintaining, updating and/or presenting the data. For the foreseeable future, those holding datasets will need to protect and extract value from them by using traditional contractual mechanisms and by relying on confidentiality/trade secret laws (assuming the data has not already been published).
Following on from the success of our previous Bristows Life Sciences Summit on gene editing, we will be exploring the use of artificial intelligence in the medical sphere in another big debate in November 2021.