AI technology will transform our world as profoundly as the first industrial revolution transformed society. Sectors from healthcare, transport, pharmaceuticals and farming to manufacturing can reap the benefits of using AI. Currently, AI is being used to trace the spread of COVID-19 and to identify potential vaccines and antibodies to fight the new coronavirus. In short, AI is a game changer and has the potential to benefit large areas of society.
However, any emerging technology carries risks, and a structured approach is needed to guard against them if the benefits of AI are to be harnessed. The European Commission (EC) White Paper on AI, released on 19 February 2020, proposes a strategy to ensure the successful uptake of AI within the EU and is divided into two frameworks:
- The Policy Framework; and
- The Regulatory Framework
The Policy Framework (aka The Ecosystem of Excellence)
The aim of the policy framework (which the EC calls the “Ecosystem of Excellence”) is to make the EU an international leader in AI research and development (R&D). The EC recommends six actions that regulators should take in order to realise this aim:
- revise the Coordinated Plan, to be adopted by the end of 2020. The Coordinated Plan is a detailed document outlining 70 joint actions between Member States and the EC on investment, research, skills and the commercialisation of AI
- create centres of excellence for the innovation and testing of AI
- improve the skill levels of the workforce developing AI solutions by attracting world-class scientists and eminent professors to teach degree-level AI courses
- construct digital innovation hubs in every Member State, each with its own AI specialism, to aid the uptake of AI by small and medium-sized enterprises (SMEs)
- spearhead a private/public partnership in AI, data and robotics to ensure coordination of R&D between academia and industry; and
- promote adoption of AI by the public sector particularly in the healthcare, utilities and transport sectors.
The Regulatory Framework (aka The Ecosystem of Trust)
With any new technology there is a need to inspire confidence in its use before it is widely employed. Many people see AI as ‘untrustworthy’ because of the complicated, and to them opaque, way in which it works. AI also relies on large data sets, some of which may be personal, and questions are being raised about how this data is collected and for what purpose it is used.
There are also anxieties about AI being used for malicious purposes. To allay such fears and build trust among the general population, the EC has identified seven key requirements that AI technologies should respect in order to be considered trustworthy:
- human agency and oversight – The principle of human agency is that a human should always be in ultimate control of an AI system. For example, an AI vehicle would not start without a human inserting a key, and the vehicle could be manually overridden at any point if it malfunctions. The autonomous vehicle sector, where control is handed over from human to machine and back again, is one of the main areas of contention and is rife with practical problems
- technical robustness and safety – The results from AI systems should be reproducible, predictable and accurate and AI systems should also be protected against cyberattacks
- privacy and data governance – AI systems typically ‘learn’ from enormous sets of data, from which they deduce relationships between variables. The data may be personal, and AI may be able to infer gender, sexual orientation, race, political views and religious views. The sensitivity of such data is obvious, so it is incumbent on developers to ensure that AI systems do not misuse or leak this information
- transparency – The inner workings of an AI system should be clearly explainable to all parties, so that everyone (even those without a technical background) can understand the basic principles on which it works
- diversity, non-discrimination and fairness – An AI system is only as good as the data it uses to learn. If the data fed to the system is inherently biased or unrepresentative, the AI system will make biased and discriminatory decisions. It is therefore necessary to have controls on the quality of the data used
- societal and environmental wellbeing – AI systems should be used to benefit society as a whole rather than individuals. In addition, sustainability and ecological impact should be taken into account when developing AI systems; and
- accountability – If an AI system malfunctions and causes harm, it should be possible to trace the manufacturers of the system and hold them to account. This is a difficult task because many parties are involved in the creation of AI systems, from the beginning to the end of the supply chain. AI systems should be traceable and should be audited at each stage of the supply chain in order to ensure their compliance.
In addition to the work highlighted above, the EC White Paper also sets out a proposed approach to the development of a regulatory framework, which it sees as necessary for the proper development and uptake of the technology across the Member States. This includes procedures, regulations, guidelines and codes of conduct to promote trust in AI. The proposed regulatory framework consists of:
- creating a new legal definition of AI – This is not a new requirement and seems long overdue. How does the EC expect to be able to lead any proposal for future regulatory or legislative development if there is no agreed definition of what this technology actually encompasses?
- amending existing legislation – The EC proposes that existing legislation be amended to keep up with the pace of technological change and to provide for ‘risks’ that were not contemplated before the advent of AI. Again, this is long overdue and requires a sector-by-sector understanding of how AI affects society in the industries leading its uptake. A general ‘AI Law’ would be too broad and potentially more harmful if it sidesteps sector-specific issues.
- creating an authority for enforcement of the seven key requirements – Although the EC suggests this in the paper, it does not give the proposed authority a name
- developing standards for testing, inspection and certification. This is certainly required in order to understand what is ‘safe’ and what is not, and to provide some clarity around the duties of care that developers and manufacturers owe to the public.
The EC White Paper closes with an invitation for the public to comment on the proposals until 19 May 2020, or whichever revised deadline comes into place following the impact of the coronavirus pandemic.
The EC proposes to use any comments received to shape its future policies, so this is a welcome opportunity to influence the debate by setting out the potential benefits and pitfalls of AI.
In short, however, we believe that this White Paper does not take us any further than we were two years ago. We agree that AI has potential risks and potential benefits, but if the EC still cannot put forward a definitive definition of what ‘AI’ is, then what are we actually talking about?
National and supranational institutions need to stop generalising about ‘AI’ and recognise that its development and application in society are moving on apace without them. Unless they begin to address specific issues in specific sectors, a ‘one size fits all’ announcement on regulation will be outdated and ineffective.