The Commission’s proposed Artificial Intelligence Regulation

European Commission publishes draft proposal for a Regulation on a European Approach for AI

05.05.2021

Published a little over a year after the Commission sketched out its potential policy options for AI legislation in its White Paper on AI, and the fruit of an extensive panoply of workshops, forums, expert groups, reports, resolutions and recommendations, as well as hundreds of consultation responses to the White Paper itself, the Regulation[1] is the Commission’s trailblazing first attempt at formulating law that applies specifically to Artificial Intelligence.

The focus of the Regulation is on AI that the Commission terms “high risk”. Much of the Regulation is drawn from approaches to regulating potentially harmful products found in existing EU safety legislation, but reworked to address potential risks in the development and operation of products and services that utilise software powered by machine learning and other AI techniques.

Notably, the Regulation will not be the only draft legislation for AI that we’ll see from the Commission this year. The White Paper was accompanied by a report on liability which contained detailed proposals to reform the rules governing how liability is determined in civil claims relating to AI. It also contemplated an overhaul of product safety legislation such as the Machinery Directive and the Product Liability Directive. The Regulation alludes to separate initiatives in these areas.

The Regulation is ambitious. If implemented in anything like its draft form, it will have a significant impact on the development of AI products and services in the EU and the UK, and no doubt further afield.

In this article we summarise what the Regulation purports to do, how it would operate and what it does and does not cover. In subsequent articles we’ll offer further analysis of what it would mean in practice, and explore specific areas in more detail.

Defining AI

The Regulation defines Artificial Intelligence using the term “AI system”, meaning “software that is developed with one or more of the techniques and approaches listed in Annex 1[2] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

In turn Annex 1 of the Regulation refers to a range of theories, techniques and approaches used in the field of AI, including some which are now generally regarded as outmoded. Expect a lively debate during the consultation phase of the Regulation on the scope of Annex 1.
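
To see how far the definition reaches, consider the following minimal sketch (ours, not the Commission’s; Python and scikit-learn are used purely for illustration). On a literal reading of Annex 1(a), even a toy supervised model of this kind is “software that is developed with” a listed technique and that generates predictions for a human-defined objective:

```python
# A minimal sketch of an "AI system" under Annex 1(a): a toy supervised
# learning model. Illustrative only; nothing in the Regulation refers to
# any particular library or algorithm.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: hours of revision vs. pass/fail outcome.
X = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

# The fitted model "generates outputs such as ... predictions" for a
# human-defined objective, so it sits squarely within the definition.
print(model.predict([[3.5]]))
```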

Territorial scope

The Regulation has extra-territorial scope, somewhat akin to the GDPR. It would apply to:

  • Providers who place AI systems on the market or put them into service in the EU;
  • Users of AI systems located in the EU;
  • Providers and Users of AI systems located in third countries, where the outputs of the AI system are used in the EU.

Overview of the Regulation

There are two principal strands to the new regulatory framework put forward in the Regulation: first, outright bans on certain categories of AI system; and second, a system of safety standards and pre-marketing conformity assessment against those standards for AI systems which are regarded as “high risk”.

Notably, AI systems for military purposes are not within the scope of the Regulation.

Banned AI practices

The Regulation proposes an outright ban on the following AI practices on the basis that the Commission regards them as incompatible with European values, rights and freedoms:

  1. AI systems that deploy subliminal techniques in order to materially distort a person’s behaviour in a manner that causes or is likely to cause physical or psychological harm;
  2. AI systems that exploit vulnerabilities of specific population groups based on their age or physical or mental disability, in order to materially distort individual behaviour in a way that causes or is likely to cause physical or psychological harm;
  3. AI systems used by public authorities to evaluate or classify the trustworthiness of people based on their behaviour or personal characteristics, with the social score leading to:
    1. detrimental or unfavourable treatment of individuals or groups in contexts which are unrelated to the contexts in which the data was originally generated or collected;
    2. detrimental or unfavourable treatment of individuals or groups that is unjustified or disproportionate to their social behaviour or its gravity;
  4. the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except in a few specified scenarios, and subject to strict conditions.

At first blush, practices 3 and 4 seem relatively straightforward, but the wording of 1 and 2, and especially 1, seems likely to attract attention from industry and other stakeholders as the Regulation moves through the legislative approval process. In a speech introducing the Regulation, Margrethe Vestager gave the example of a toy that uses voice assistance to manipulate a child into doing something dangerous as a subliminal technique that would fall into category 1.

“High Risk” AI systems

The Regulation places high risk AI systems into two categories:

  1. an AI system that is intended to be used as a safety component of a product, or is itself a product, covered by the EU legislation listed in Annex 2 of the Regulation, where the safety component or the product is required to undergo third party conformity assessment under that legislation before it can be placed on the market; or
  2. an AI system that falls within one of the categories of AI system listed in Annex 3 of the Regulation.

Annex 2 is a list of product safety and related market surveillance legislation, including the Medical Device Regulation, the In Vitro Diagnostic Regulation, and the Motor Vehicles Approval and Market Surveillance Regulation. A product manufacturer who places a high risk AI system onto the market in a product covered by any of the Annex 2 legislation will be obliged to comply with the Regulation in respect of the AI system in, or that comprises, the product.

The “high risk” AI systems listed in Annex 3 include:

  • AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;
  • AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity;
  • AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions;
  • AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, and evaluating candidates in the course of interviews or tests.

The Commission has the right to review and update the list in Annex 3 periodically.

Safety requirements for high risk AI systems

The Regulation then proposes a detailed standards and transparency regime, setting out the following standards and obligations that will need to be met in relation to high risk AI systems:

  • the establishment and continuous maintenance of a risk management system throughout the lifecycle of the AI system;
  • data and data governance, to ensure data used for training, testing and validation of the AI system is of high quality, free from bias, and used in accordance with data protection legislation;
  • technical documentation explaining the development and functioning of the AI system;
  • an obligation to create and maintain logs to monitor the functioning of the AI system in use (a short sketch of what such logging might look like follows this list);
  • transparency and provision of information to users, including instructions for use;
  • human oversight, meaning design and development of the AI system in such a way that it can be effectively supervised by a human being while in use;
  • accuracy, robustness and cybersecurity.
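
The Regulation does not prescribe how the logging obligation should be discharged, so the following is only a sketch of the general idea, under our own assumptions: each inference is wrapped so that its input, output and timestamp are written to a durable audit log. The file name, record fields and version identifier are all hypothetical.

```python
# A hedged sketch of the record-keeping obligation: log every inference so
# the functioning of the AI system in use can be monitored and traced.
# All names here are hypothetical; no format is mandated by the Regulation.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)

def logged_predict(model, features):
    """Run one inference through any model exposing a scikit-learn-style
    predict() method and append a structured record of the event."""
    prediction = model.predict([features])[0]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": features,
        "output": str(prediction),
        "model_version": "0.1-draft",  # hypothetical version identifier
    }
    logging.info(json.dumps(record))
    return prediction
```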

This standards regime will introduce substantial regulatory compliance obligations onto the development and operation of products and services caught by the Regulation. The concept of a risk management system in particular, whilst familiar to manufacturers already regulated by the product safety legislation in Annex 2, will mean a whole new level of regulatory compliance for software developers hitherto untouched by the EU’s product safety laws.

Obligations on economic operators in relation to high risk AI systems

Obligations in relation to high risk AI systems created by the Regulation will be spread across the economic operators in the AI supply chain, namely users, providers, importers, distributors, and authorised representatives.

The bulk of the obligations fall on providers of an AI system, which are defined as “a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge”.

Broadly, a provider will need to:

  • Ensure their AI system complies with the safety requirements set out in the Regulation, which are briefly summarised above;
  • Put in place a quality management system to ensure compliance with the Regulation, which must be documented in the form of written policies, procedures and instructions and contain at least all of the information set out in Article 17 of the Regulation;
  • Draw up the technical documentation for the AI system and maintain the logs automatically generated by the AI system while it is under their control;
  • Ensure the AI system undergoes the appropriate conformity assessment procedure before it is placed on the market, have the AI system CE marked, and draw up an EU declaration of conformity in respect of the AI system;
  • Withdraw, recall, or take corrective actions in respect of their AI system if they have reason to believe it is not in conformity with the Regulation;
  • Communicate as required with supervisory authorities, including registering the AI system in the EU database created by the Regulation.

“Authorised representatives” are defined as “any natural or legal person established in the Union who has received a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation”. An authorised representative can be appointed by a provider established outside the EU to keep a copy of the EU declaration of conformity and technical documentation at the disposal of supervisory authorities and to cooperate with any requests from supervisory authorities in relation to the AI system.

“Importers” are defined as “any natural or legal person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union”.

Before placing an AI system on the market in the EU, an importer must ensure that the system has been conformity assessed by the provider, that technical documentation has been drawn up, and that the system bears the required conformity marking and is accompanied by all necessary information and documentation to comply with the Regulation. They must also ensure that the AI system is accompanied by their name and contact address, ensure the system is stored and transported in such a way that the safety requirements summarised above are not jeopardised, and cooperate with any requests from supervisory authorities in relation to the AI system.

“Distributors” are defined as “any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties”. They have the same obligations as an importer, but they are additionally required to withdraw, recall or take corrective action in respect of AI systems which they have reason to believe are not in conformity with the Regulation, in the same manner as a provider.

“Users” are defined as “any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity”. The Regulation places various miscellaneous obligations on them in relation to the manner in which high risk AI systems are put to use, such as the transparency obligations which are briefly described below.

Any party which places on the market or puts into service a high risk AI system under their own name or trademark, modifies the intended purpose of a high risk AI system which is already on the market or in service, or substantially modifies a high risk AI system, shall be deemed to be a provider for the purposes of the Regulation.

Conformity assessment

AI systems which are regarded as “high risk” by the Regulation will need to undergo conformity assessment before they can be placed on the market in the EU.

A conformity assessment is a procedure in which the technical documentation and quality management system prepared by the provider are assessed to confirm that the AI system conforms with the requirements of the Regulation. This is a concept taken from the EU product safety legislation, for example the Medical Device Regulation.

Conformity assessment can either take the form of a self-assessment by the provider themselves or an assessment by an expert third party notified body.

The precise nature of the conformity assessment procedure required by the Regulation depends on the type of high risk AI system in question. Only AI systems used for biometric identification will need to undergo conformity assessment by a notified body. All other high risk AI systems listed in Annex 3 can undergo conformity self-assessment by the provider. High risk AI systems subject to the EU product safety legislation must undergo the conformity assessment procedures set out in that legislation. Those conformity assessment procedures must also verify that the high risk AI system meets the safety requirements of the Regulation.

If a high risk AI system is substantially modified after it has been conformity assessed, it must be conformity assessed again. The Regulation contains specific provisions to assess when a “substantial modification” has been made to a high risk AI system that continues to learn after it has been put into service.

Notified bodies

In support of the conformity assessment system for high risk AI systems, the Regulation makes provision for the certification of notified bodies. Notified bodies are expert organisations designated by Member State authorities which are responsible for conducting conformity assessments in accordance with the requirements of the Regulation. Assessment by a notified body is intended to be more rigorous and impartial than self-assessment, and therefore more appropriate for the highest risk AI systems.

Transparency obligations for certain types of AI system

In addition to the various obligations outlined above which are designed to ensure that AI systems are built in a way which ensures their safety, the Regulation imposes some specific obligations to protect consumers from being ‘tricked’ by AI that can mimic human voice or appearance.

Providers of AI systems will need to ensure that AI systems intended to interact with people are designed and developed in such a way that those people are informed that they are interacting with an AI system, unless this is obvious from the circumstances in which the system is used.

Further, users of emotion recognition systems or a biometric categorisation system will be required to inform people exposed to these systems that they are in use. Users of AI systems for the generation of convincing ‘deep fake’ content will be required to disclose that the content has been artificially generated or manipulated.
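
Neither the wording nor the mechanism of these disclosures is prescribed, but in practice the obligation could be discharged with something as simple as the following sketch (all wording and names are our own illustrative assumptions):

```python
# Illustrative only: the Regulation requires that people are informed, but
# does not prescribe how. Both messages below are hypothetical wording.
AI_DISCLOSURE = "Please note: you are interacting with an AI system."
DEEP_FAKE_LABEL = "[This content has been artificially generated or manipulated.]"

def greet_user() -> str:
    # Inform the person at the start of the interaction, unless this is
    # obvious from the circumstances in which the system is used.
    return AI_DISCLOSURE

def label_generated_content(content: str) -> str:
    # Attach a disclosure to 'deep fake' style generated or manipulated content.
    return f"{DEEP_FAKE_LABEL}\n{content}"
```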

New institutions

The Regulation will also set up, or require Member States to set up:

  1. a European Artificial Intelligence Board composed of one representative from each Member State, the Commission and the European Data Protection Supervisor, which will supervise the application of the Regulation across the EU, collate best practice and advise EU institutions on questions relating to AI;
  2. regulatory sandboxing schemes to facilitate the development, testing and validation of innovative AI systems in a controlled environment, to which SMEs will have priority access;
  3. a database of the high-risk AI systems registered for use in the EU, which shall be publicly accessible; and
  4. national competent authorities to ensure the application and implementation of the Regulation, and national supervisory authorities responsible for market surveillance and designation of notified bodies.

Penalties

The Regulation includes a penalty regime along the lines of the GDPR, with each maximum fine for companies set at the higher of a fixed sum and a percentage of total worldwide annual turnover (a worked example follows the list):

  • €30m or 6% of global turnover for a breach relating to prohibited AI systems, or a breach of the training, validation and test dataset obligations in Article 10.
  • €20m or 4% of global turnover for a breach of any other obligations in the Regulation.
  • €10m or 2% of global turnover for providing incomplete, incorrect or misleading information to regulatory authorities or notified bodies.
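
The arithmetic is simple but worth spelling out in a short sketch:

```python
# A worked sketch of the fine arithmetic: for companies, the draft caps each
# fine at the higher of the fixed amount and the percentage of total
# worldwide annual turnover for the preceding financial year.
def max_fine(fixed_eur: int, pct: float, global_turnover_eur: int) -> float:
    return max(fixed_eur, pct * global_turnover_eur)

# E.g. a company with EUR 1bn global turnover breaching the prohibited-practices
# rules: 6% of turnover (EUR 60m) exceeds the EUR 30m fixed sum, so EUR 60m applies.
print(max_fine(30_000_000, 0.06, 1_000_000_000))  # 60000000.0
```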

These are eye-watering numbers. No doubt the Commission will come under pressure to justify why setting administrative fines at this level is proportionate or necessary.

Conclusion

For anyone looking to develop, operate or use an AI-powered product or system in the EU that could conceivably fall into one of the Regulation’s “high-risk” categories, the Regulation is a game changer.

Those most dramatically affected are likely to be companies not currently regulated by any of the EU’s product safety legislation, for whom the introduction of a product safety-type compliance framework will be a profound change.

Whether the Commission has struck the right balance between regulating for product safety and encouraging innovation in AI in the EU remains to be seen, but as the first EU law focussed on AI, the Regulation will have a lasting impact on regulators and policy makers around the world.

Bristows Life Science Summit 2021

Following on from the success of our previous Bristows Life Sciences Summit on gene editing, we will be exploring the use of artificial intelligence in the medical sphere in another big debate in November 2021.

Keep an eye on our events page for further details, and register your interest here.

———————
[1] https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence
[2] These techniques and approaches are: (a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods.