On Thursday 12 January 2017, the Legal Affairs Committee of the European Parliament approved a draft resolution setting out proposed rules and recommendations for the regulation of robotics and AI. The resolution is based on a report prepared by a Working Group on legal questions related to the development of Robotics and Artificial Intelligence, which Bristows’ Robotics and AI team contributed to. The report outlines a possible framework for the regulation of robotics and AI, which it says have the potential to “unleash a new industrial revolution which is likely to leave no stratum of society untouched”.
Among the more interesting proposals raised in the report are:
• a new European agency for robotics and AI;
• a framework for rules dealing with civil liability of robots;
• a new category of legal entity for autonomous robots; and
• a suggestion of conferring IP rights on robots/AI.
The Legal Affairs Committee sets out a series of core principles that could form the basis of a specific liability scheme for robotics:
• strict liability – requiring only proof of damage and a causal link between the robot’s behaviour and the damage suffered;
• liability of an ultimately responsible human or corporate entity proportionate to the actual level of instructions given to the robot and its autonomy – so that the greater a robot’s learning capability or autonomy, the lower the human or corporate liability should be;
• a compulsory insurance scheme (akin to that for motor vehicles), either robot-specific or industry-wide, supplemented by a compensation fund; and
• a new category of legal entity for robots, called ‘electronic personality’.
The concept of ‘electronic personality’ for particularly sophisticated robots, with legal rights and responsibilities analogous to humans, has attracted particular media attention: ‘Should a robot have capacity to enter into legal transactions, to own property, and have a right to exist?’ ‘Is it right to “humanise” robots in this way?’
However, in our view, this is not what the European Parliament is proposing here. Rather, the report suggests that the purpose of recognising robots as ‘electronic persons’ is to assign legal responsibility to robots for their interactions with third parties, including making good any damage they may cause. While not quite the sensationalist view we have seen taken by some popular media outlets, the concept of ‘electronic personality’ is an interesting proposal designed to ensure victims of harm by robots have effective recourse in circumstances where a human or corporate actor cannot be said to be responsible.
As we have discussed previously, sophisticated autonomous robots will make decisions and act independently of human input. In such circumstances, causality between a human (or corporate) act or omission and the damage suffered may begin to break down; traditionally, where legal personhood exists for non-humans (such as companies), there is necessarily a human actor behind the scenes to pursue. It is therefore right that the committee recognises that robots and AI will be able to cause damage in their own right; allocating legal entity status to intelligent, self-determining machines is one way to allocate liability and ensure effective remedies are available for those harmed. In this regard, the committee’s proposal that a robot’s civil law liability be backed up by insurance schemes and compensation funds is a sensible one.
The committee’s proposals represent a guide for the upcoming debate on the actual regulatory framework that will need to be devised to address the emerging wave of robotics and AI being introduced to the market. We encourage the robotics industry to play a key part in that debate in order to help build a broad consensus on the key principles that will underpin the regulatory environment. Bristows’ Robotics and AI team will continue to contribute to the discussion in this area, with further articles and events to come.