The European Commission unveils a new set of proposals to regulate Artificial Intelligence

In recent years, the fast-evolving use of Artificial Intelligence (“AI”) has been a hot topic and has raised new legal issues in terms of data protection, competition and liability. On 21 April 2021, following its publication of a White Paper on AI in 2020, the European Commission (“the Commission”) unveiled a new legal framework to regulate the use of AI in the European Union (“the Proposed Regulation”).

The Commission is putting forward the Proposed Regulation with the following specific objectives: (i) ensure that AI systems placed on the market are safe and in conformity with fundamental rights and Union values; (ii) ensure legal certainty to facilitate investment and innovation in AI; (iii) enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and (iv) facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

The Proposed Regulation follows a risk-based approach whereby the use of AI systems (broadly defined by reference to three common programming techniques) is categorised according to the level of risk posed to human safety and fundamental rights: unacceptable, high, and low or minimal.

In line with the EU’s General Data Protection Regulation, the Proposed Regulation is intended to have an extra-territorial application, as it would apply to all providers of AI systems, with the “provider” defined as the person placing on the market or putting into service AI systems in the EU, irrespective of whether that person is established within the EU or not. The framework also imposes requirements on certain importers, distributors and users of AI systems.

Unacceptable risk - The Proposed Regulation prohibits certain AI systems that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit their vulnerabilities in a manner that is likely to cause them psychological or physical harm. AI systems for social scoring by public authorities and the use of “real time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement are also prohibited unless certain limited exceptions apply.

High risk - The majority of the Proposed Regulation focuses on high-risk AI systems, which include, among others, systems used as safety components in certain products such as robotics, toys and medical devices, as well as systems intended to be used for biometric identification, the operation of critical infrastructure, employment decisions or creditworthiness assessments. The Commission would be entitled to extend this list by identifying new forms of high-risk AI systems over time.

Providers of high-risk AI systems will have to comply with a strict set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the market. Such high-risk AI systems will also have to meet requirements in terms of transparency towards users, documentation, human oversight and record-keeping.

Undertakings violating the general prohibitions on AI could face considerable fines of up to 6% of their annual worldwide turnover or EUR 30 million (whichever is higher). Member States will also have to lay down sanctions for other infringements of the Proposed Regulation.

Each Member State will have to designate a national supervisory authority, and a new European Artificial Intelligence Board will be set up at EU level.

The Proposed Regulation will now likely undergo intense discussions in the Council of the European Union and the European Parliament before being adopted and becoming EU law. It could therefore take several years before it becomes directly applicable, as it would only apply one to two years after its adoption. This means that the new set of obligations is unlikely to apply before 2024.

Please contact Karel Janssens for further information and/or for general legal advice relating to artificial intelligence.
 
