The EU’s proposed AI Regulation
A sliding scale based on risk
The AI Regulation focuses on when, where and how AI is developed, marketed, and used.
Every application and use case of AI will fall into one of four different risk categories: unacceptable, high, limited and minimal, with differing degrees of regulation applying to each. The higher the risk, the stricter the requirements.
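To make the tiering concrete, the sketch below models the four categories as a simple lookup. The tier names and the example use cases are taken from the proposal and the Commission's accompanying materials, but the `RiskTier` enum, the example mapping and the `classify_use_case` helper are purely illustrative assumptions; in practice the categorisation is a legal assessment, not a mechanical one.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers in the proposed AI Regulation (illustrative labels)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and high-risk requirements"
    LIMITED = "enhanced transparency obligations"
    MINIMAL = "no additional restrictions"


# Illustrative mapping of example use cases (drawn from the proposal's own
# examples) to tiers; the real assessment is legal, not mechanical.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening recruitment software": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}


def classify_use_case(description: str) -> RiskTier:
    """Look up a known example; default to MINIMAL, which the Commission
    expects to cover the vast majority of AI systems."""
    return EXAMPLE_USE_CASES.get(description, RiskTier.MINIMAL)


if __name__ == "__main__":
    for use_case, tier in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {tier.name} ({tier.value})")
```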
The AI Regulation also contains future-proofing provisions allowing for additions and/or expansions to these categories, and the examples contained within them, to cover emerging uses and applications of AI.
“Unacceptable risk” – prohibited
There is an outright prohibition on certain AI systems which the Commission deems to pose an unacceptable level of risk, i.e. which are assumed to be particularly harmful and to contravene EU values such as respect for human dignity, freedom, equality, democracy, the rule of law and fundamental rights. The list of prohibited systems is exhaustive and, in summary, covers:
- AI systems that distort human behaviour in a manner causing physical or psychological harm by deploying subliminal techniques or by exploiting vulnerabilities due to the person’s age or physical or mental disability.
- AI-based social scoring systems deployed by public authorities leading to the detriment of individuals or groups of people.
- AI systems used for real-time remote biometric identification in publicly accessible spaces for the purpose of law enforcement, subject to a number of exceptions.
While the latter two categories are focused on public authorities and law enforcement, the first is more relevant to private actors’ own interactions with end-users. Certain AI applications could be said to involve “deploying subliminal techniques” to influence human behaviour. The limiting factor set by the AI Regulation is the causation of physical or psychological harm, but psychological harm is not defined in the proposal. If that remains the case in the final text, what amounts to psychological harm may be a key battleground in any future AI disputes.
“High-risk” – strictly regulated
High-risk AI systems are those which affect human health and safety, or which involve the use of intrusive data analytics and profiling that could affect fundamental rights. Such systems will need to undergo a conformity assessment procedure to verify that they comply with a set of specifically designed high-risk requirements (explained below), following which a written declaration of conformity must be drawn up, a CE mark applied and the AI system registered in a new EU database to be set up by the Commission.
High-risk AI systems are split into two categories:
- Safety products/components. This covers AI systems which are used as safety components in products (or are themselves products) that are already subject to existing conformity assessment systems under specific EU harmonisation legislation. For example, existing EU regulations require cars, medical devices and industrial machinery to be assessed for conformity with various essential safety requirements before they can be placed on the market. Once the AI Regulation comes into force (along with any relevant secondary legislation, e.g. implementing acts specific to automated vehicles), these existing conformity assessments will also include compliance with the high-risk requirements described below. This would cover, for example, the use of machine learning in an autonomous vehicle or an AI-enabled pacemaker.
- Specific uses of AI in sensitive sectors with fundamental rights implications. This covers AI systems which are not used in situations that are already subject to EU harmonisation legislation, as above, and which fall into one of eight areas listed in Annex 3 to the regulation.
High-risk applications (Annex 3)
- AI used for biometric identification and categorisation of natural persons
- AI used as safety components in the management and operation of critical infrastructure, such as the supply of utilities
- AI used for determining access to, and assessments in, educational and vocational training
- AI used in employment, workers management and access to self-employment, including use in recruitment, task allocation or the monitoring and evaluation of performance
- AI used to evaluate the creditworthiness or credit score of individuals, or in certain other ways that determine access to, and enjoyment of, essential private services and public services and benefits
- AI used in law enforcement for individual risk assessments or as polygraphs
- AI used for assessing security risks in migration, asylum and border control management
- AI used to assist a judicial authority in the administration of justice and democratic processes
Products or services which fall into these use cases will be subject to self-assessment conformity obligations to confirm compliance with the high-risk requirements described below, with the exception of systems for biometric identification and categorisation of natural persons, which will be subject to conformity assessment by an external testing body.
Because these high-risk requirements are wide-ranging (see below), conformity assessments of any kind will impose significant burdens on those who develop, market or use AI applications falling into either category. The impact may vary by sector and use case. Systems for the management of critical infrastructure are already tightly regulated and controlled, and those responsible for them will be used to operating within a complex regulatory framework. In contrast, providers and users of biometric scanners or recruitment software may find these changes more demanding. The AI Regulation seems particularly keen to tighten protections against algorithmic bias and discrimination in the workplace, and performance management algorithms – such as the algorithm used by a food delivery platform that an Italian tribunal recently found to discriminate against certain categories of riders – would be treated as high-risk applications.
At the same time, AI developers will welcome the Commission’s position that self-assessment of conformity is sufficient for most high-risk AI systems. This saves them from having to disclose their algorithms and underlying training data to external testing bodies for review, ensuring that intellectual property protections and trade secrets covering those assets are not compromised.
“Limited risk” – enhanced transparency
The AI Regulation identifies three categories of AI systems which, while not necessarily “high-risk”, will need to meet enhanced transparency requirements:
- AI systems that interact with natural persons will need to be designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and context.
- Emotion recognition or biometric categorisation systems must inform end-users that they are exposed to such a system.
- AI systems that generate or manipulate content to resemble existing persons, objects, places or other entities or events, so that the content would falsely appear to a person to be authentic (i.e. a ‘deep fake’), must disclose that the content has been artificially generated or manipulated.
These applications do not, however, need to comply with the high-risk requirements (below) or undergo conformity assessment, unless they separately constitute high-risk applications (e.g. an AI system with which employees interact in order to obtain access to vocational training).
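As a purely illustrative sketch of the first of these obligations, a provider of a conversational system might surface a disclosure at the start of an interaction. The `DisclosingChatbot` wrapper below is a hypothetical example, not drawn from the regulation or any real product; it simply shows one way the “you are interacting with an AI system” notice could be implemented.

```python
class DisclosingChatbot:
    """Hypothetical wrapper that prepends an AI-interaction disclosure,
    reflecting the limited-risk transparency obligation that users must
    be informed they are interacting with an AI system."""

    DISCLOSURE = (
        "Please note: you are interacting with an automated AI assistant, "
        "not a human agent."
    )

    def __init__(self, generate_reply):
        # generate_reply is any callable mapping a user message to a reply.
        self._generate_reply = generate_reply
        self._disclosed = False

    def respond(self, user_message: str) -> str:
        reply = self._generate_reply(user_message)
        if not self._disclosed:
            self._disclosed = True
            return f"{self.DISCLOSURE}\n\n{reply}"
        return reply


if __name__ == "__main__":
    bot = DisclosingChatbot(lambda msg: f"You said: {msg}")
    print(bot.respond("Hello"))   # first reply carries the disclosure
    print(bot.respond("Thanks"))  # subsequent replies do not repeat it
```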
“Minimal risk” – no additional restrictions
The EU expects that the “vast majority” of AI technology will fall into the minimal-risk category, which can be developed and used freely, subject only to any relevant existing legislation (this category is not formally listed in the legislative proposal but is described in the Commission’s Q&As published alongside the proposed AI Regulation). No conformity assessment is required for such technology. Examples would include email spam filters and mapping products used for route planning.
At the same time, the Commission and the newly formed European Artificial Intelligence Board (see below) will encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application of the mandatory high-risk requirements to AI systems that do not fall within the high-risk category.
Authors
Giles Pratt Partner
London
Dr. Christoph Werkmeister Partner
Düsseldorf
Sascha Schubert Partner
Brussels
Satya Staes Polet Partner
Brussels
Ben Cohen Senior Associate
London