Mario Tokarz

The EU's new Artificial Intelligence Act. An overview of how it might impact development.

Updated: May 13, 2022


Cover photo by Scott Graham on Unsplash

The European Union has proposed a regulation for Artificial Intelligence, the Artificial Intelligence Act (AIA) [1]. The legislation was introduced by the European Commission and is currently being deliberated. This article will give a short introduction to the AIA and then focus primarily on mapping its provisions to the development cycle of an ML product. Note that the details of the law might still change, so I will highlight the topics that stood out to me from a technical (not legal) point of view. If you are looking for a broader overview, there are good summaries of where things stand today (e.g. [2]).


Let's get going.


The AIA Basics

To start with, there are some basics worth pointing out.


Where does the AIA apply?

The AIA applies where AI systems are placed in the EU market or users of the AI system are located within the EU. There are some fields of application with more specific regulations (e.g. civil aviation) where the AIA only applies in part. Some fields of application are not subject to the AIA at all (e.g. military applications).


What is an AI system?

The definition of an AI system within the AIA is rather broad. Of course, it includes machine learning approaches (such as supervised and unsupervised learning). However, logic-based approaches (e.g. inductive logic programming) and statistical approaches (e.g. Bayesian estimation) are also considered AI systems.


What are the risk classes used in the AIA?

The AIA classifies AI systems according to the risks they pose (a risk-based approach). There are four main risk classes, summarized in the graphic below.


Source: European Commission [3]


Unacceptable-risk AI practices include social scoring by public authorities and 'real-time' remote biometric identification systems, which essentially allow people to be identified live via cameras. These uses are generally prohibited, although there is a range of possible exceptions for law enforcement.


In addition, AI systems that distort people's behavior in a way that causes physical or psychological harm to them or to others are prohibited.


High-risk AI systems are at the core of the AIA, and the rules described in the following section mainly apply to them. The AIA spells out a list of high-risk applications. These include biometric identification, management of critical infrastructure (e.g. water supplies), AI use in education (e.g. university admissions), AI use by public authorities (e.g. deciding on social benefits or asylum applications, aiding law enforcement), AI used for credit scoring, and more (refer to [3] for a more complete list).


In addition, certain AI systems that serve as safety components are considered high-risk AI systems.

Where limited-risk applications are deployed, users have certain "transparency rights". These include being informed when they interact with an AI system (think of chatbots), when emotion recognition systems are used, and when media has been generated by AI (deep fakes).

Minimal-risk AI systems are all other AI systems. Most of the rules for development and certification described in the following section thus do not apply to them. However, the AIA encourages the setup of voluntary codes of conduct for the development of such systems.

In all cases, exceptions can apply, so take this classification with a grain of salt for now.


The lifecycle of high-risk AI systems

Understanding the AIA and its requirements for high-risk AI systems is simpler when looking at it in the context of the product lifecycle, so I extended the simple but very useful lifecycle model used by Andrew Ng (see [4]).


The steps he outlines in the video are marked in green below. To keep it simple, I have split the lifecycle into three phases. Note that I will limit myself to giving you an overview of which areas are touched by the AIA. It is important to point out that the AIA aims to impose requirements without giving concrete guidance on how a problem could or should be solved in practice. The AIA's crafters propose a mechanism of harmonized standards to be established separately; I will touch briefly on this in the summary below.


The development phase starts with the scoping of the project. This will become a very important part of AI projects and their governance, as the definitions of the use case, the target user group, and the target metrics for proper functionality are needed for the later assessment. One could also say that scoping will form the basis for explaining why a certain solution was deemed good enough to go to market.

As the project enters an iterative cycle of data collection and training, the AIA formalizes requirements that impact governance and functionality. To give one example: for data governance, the AIA spells out that the data sets used shall be 'relevant, representative, free of errors and complete'. As mentioned above, these are merely abstract non-functional requirements that will need to be filled with technical meaning.
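To make this concrete, here is a minimal sketch of how such a requirement could be translated into automated checks. The specific checks (missing values for 'complete', duplicates for 'free of errors', label balance as a crude proxy for 'representative') and all names in the usage example are my own assumptions, not anything the AIA prescribes:

```python
import pandas as pd

def basic_data_governance_checks(df: pd.DataFrame, label_col: str) -> dict:
    """Collect a few indicators backing the 'free of errors and complete' claim."""
    return {
        # 'complete': share of missing values in the worst-affected column
        "max_missing_ratio": float(df.isna().mean().max()),
        # 'free of errors': at the very least, no duplicated records
        "duplicate_ratio": float(df.duplicated().mean()),
        # 'representative': label balance as one crude proxy
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }

# Hypothetical usage:
# df = pd.read_csv("training_data.csv")               # assumed file
# print(basic_data_governance_checks(df, "outcome"))  # 'outcome' is a made-up label column
```

Real projects would go much further (outlier detection, coverage of demographic subgroups, etc.), but even a report like this forces a team to state what 'complete' and 'representative' mean for their data.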


Additionally, some requirements, such as the need for effective human oversight, might require models to produce understandable reasoning and (when oversight is needed in real time) ways for a human supervisor to interact with a running system. This will directly translate into functional requirements: What explanation created by the system is good enough to allow for human oversight? What is the UI to interact with the running system?
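As a toy illustration of the first question, here is one very simple form of per-prediction explanation: the per-feature contribution to a linear model's score. The data and feature names are made up, and whether something this basic would count as sufficient for human oversight is exactly the open question:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny made-up training set: two features, binary label.
X = np.array([[1.0, 0.2], [0.3, 0.8], [0.9, 0.9], [0.1, 0.1]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

# Per-feature contribution to the logit for one prediction -- a very
# simple 'explanation' a human supervisor could inspect.
sample = X[0]
contributions = model.coef_[0] * sample
for name, value in zip(["feature_a", "feature_b"], contributions):
    print(f"{name}: {value:+.3f}")
```

For non-linear models, attribution methods (e.g. SHAP values) would play a similar role, but the functional requirement remains the same: the system must emit something a human can act on.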


The assessment phase is a mandatory step for high-risk AI covered by the AIA. Put simply, companies will need to produce technical documentation for their ready-to-use system. The technical documentation's main content is defined within the AIA. Besides a general description of the system (purpose, hardware environment, version, etc.), it will need to contain technical details of how the system was created (methods and steps performed, usage of pre-trained models, design spec with key design choices, architecture, data sheets, etc.) and how it needs to be operated. Based on the technical documentation, conformity of the system with the AIA needs to be confirmed (in-house or by an external auditor, depending on the system).
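To give a feeling for the scope, here is a sketch of how the documentation's main contents could be captured as a structured artifact versioned alongside the model. The field names loosely paraphrase the items listed above; the structure itself is my own assumption, not a format prescribed by the AIA:

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    # General description of the system
    intended_purpose: str
    system_version: str
    hardware_environment: str
    # How the system was created
    methods_and_steps: list = field(default_factory=list)
    pretrained_models_used: list = field(default_factory=list)
    key_design_choices: list = field(default_factory=list)
    data_sheets: list = field(default_factory=list)
    # How the system needs to be operated
    instructions_for_use: str = ""
```

Keeping this machine-readable means the documentation can be regenerated and diffed with every model update, which matters once re-assessments become routine.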


The system then receives a CE marking and is ready to go to market. Of course, the assessment will need to be repeated whenever the system is updated. Issues in certifying a system might throw you back into any phase of development; in the figure, I have indicated this with a single orange arrow from the AIA conformity assessment back to scoping.


The market phase represents the system's lifetime in production. A main requirement here is that the system needs to be monitored for model drift (degradation of performance over time) and that effective oversight is ensured. I will not cover it in depth today, but model drift in particular is a real issue that is hard to solve, as addressing it can require retraining and updating the model. In turn, this might require a re-assessment of the system. So, as simple as it is to write down, closing this loop in practice will be one of the major challenges.
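As a small illustration of what such monitoring could look like, here is a sketch that flags drift in a single input feature using a two-sample Kolmogorov-Smirnov test. Which features to monitor, the significance level, and what exactly should trigger a re-assessment are all assumptions the AIA leaves open:

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift_detected(reference: np.ndarray,
                           production: np.ndarray,
                           alpha: float = 0.01) -> bool:
    """Flag drift when production data departs from the reference
    distribution captured at assessment time."""
    _statistic, p_value = ks_2samp(reference, production)
    return p_value < alpha  # small p-value: distributions likely differ

# Synthetic usage example:
# rng = np.random.default_rng(0)
# ref = rng.normal(0.0, 1.0, 5000)   # feature values at certification time
# prod = rng.normal(0.4, 1.0, 5000)  # shifted values in production
# feature_drift_detected(ref, prod)  # -> True, which might trigger re-assessment
```

In practice, monitoring label drift and realized performance is just as important, but input drift is often the earliest signal available in production.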


Finally, let me mention the need to have end-to-end quality and risk management systems in place. This personally reminded me of ISO 9001 and the challenges of rolling it out. I presume it will be crucial to see which standards will be deemed sufficient to fulfill these requirements.


Summary

In this article, I tried to give a very brief introduction to the EU's proposed Artificial Intelligence Act. Specifically, I mapped some of its provisions to the development phases of an ML project to outline, from a practical point of view, the challenges and possible requirements that might stem from the AIA. Aspects such as data governance and testing strategy are complex matters that merit articles (or books) of their own. I might explore some of these fields in future articles.

In addition, the AIA foresees some mechanisms to lower the burden on companies, which were not covered today. One of these mechanisms is relying on harmonized standards: systems developed in accordance with them will be presumed compliant with the AIA. Such standards are currently under discussion at multiple levels. There is also a sandbox scheme intended to make it easier for startups, for example, to innovate. The details of these schemes are largely left to the national authorities, meaning we will need to wait for more information to see which companies can benefit and what that benefit might look like.


As things stand today, the AIA will have a large impact on companies, with compliance costs in 2025 estimated between €500 million and €10 billion. The maximum penalties for violations will be 6% of global annual turnover or €30 million (whichever is higher), with a lower tier of 4% or €20 million for less severe infringements.


What are your thoughts about the AIA and how it will impact your industry? Is your company already making plans to comply with the new regulations once they are official? I would love to hear your thoughts in the comments, here or on LinkedIn. If you are a company looking to begin the compliance process, RightMinded AI can help. Reach out to us to start the conversation.


Sources

[1] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM/2021/206 final. Last retrieved Sept 7, 2021, from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

[2] The Fight to Define When AI Is ‘High Risk’. Khari Johnson, Wired. Last retrieved Sept 7, 2021, from https://www.wired.com/story/fight-to-define-when-ai-is-high-risk/

[3] Regulatory framework proposal on Artificial Intelligence. European Commission. Last retrieved Sept 7, 2021, from https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[4] A chat with Andrew on MLOps: From Model-centric to Data-centric AI. Last retrieved Sept 7, 2021, from https://youtu.be/06-AZXmwHjo




