
European AI Regulation
The principles for interpreting the requirements for high-risk AI systems

Yannick Caballero Cuevas
(Translated by DeepL)
The European AI Regulation (AI Act), designed in particular as legislation governing the safety of AI systems (AIS), imposes requirements that any high-risk AIS must meet before it is placed on the market or put into service in the European Union, and throughout its life cycle. In the banking and financial sector, the AI Act considers credit scoring AIS to be high risk. The requirements are set out in Articles 8 to 15 AI Act and relate in particular to the risk management system (Article 9 AI Act), data governance (Article 10 AI Act), human oversight (Article 14 AI Act), and the accuracy, robustness and cybersecurity of the AIS (Article 15 AI Act). This commentary addresses the principles of interpretation set out in Art. 8 AI Act. It should be recalled here that the AI Act may have extraterritorial scope (cf. Fischer, cdbf.ch/1397/).
Art. 8 para. 1 AI Act provides that “High-risk AI systems shall comply with the requirements laid down in [section 2 of chapter 3], taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements”. The provision thus reiterates that any high-risk AIS must meet the mandatory requirements set out in Articles 9 to 15 AI Act. The measures taken by the provider to comply with those requirements must take into account the context in which the AIS operates and be proportionate to the objectives of the AI Act (recital 64 AI Act). Consequently, the principle of proportionality should guide the examination of the AIS’s compliance, in light of its intended purpose and good AI practice.
Firstly, the ‘intended purpose’ referred to in Art. 8 para. 1 AI Act is defined in Art. 3(12) AI Act. This concept covers “the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation”. In other words, the intended purpose refers to the main use for which the provider has designed the AIS, as specified in the documents it makes available (e.g. user manual, advertising, technical documentation) and in its statements. The question may arise as to whether – when determining the intended purpose of a high-risk AIS – the provider must take into account reasonably foreseeable misuse. In our opinion, the notion of intended purpose should refer to the use intended by the provider, without taking into account misuse that falls outside the main purpose for which the AIS was developed (in this sense, cf. Schneeberger et al., Art. 8, pp. 226-227). The concrete implementation of the requirements therefore depends on the purpose intended by the provider, and not on any misuse that third parties might make of the AIS, over which the provider ultimately has only limited control. The provider must nevertheless ensure a level of protection sufficient for the intended use (cf. Blue Guide, point 2.8). In our opinion, this protection should be ensured by implementing the requirements of the AI Act.
Secondly, the application of the requirements and the conformity assessment must take into account the generally acknowledged state of the art in the field of AI. This refers to the level of development of technical capabilities at a given time with regard to products, processes and services, based on relevant consolidated findings of science, technology and experience, and recognised as good practice in the technological field (cf. Draft standardisation request on AI – amendment to Decision C(2023)3215, annex, p. 1). The generally acknowledged state of the art should therefore be understood as the body of well-established and recognised best practices in the field of AI. It should not, however, be taken to include the latest scientific research that is still at the experimental stage or has not yet reached sufficient technological maturity.
Art. 8 para. 2 AI Act describes the place of the AI Act in the European legal landscape and its interaction with other Union harmonisation legislation. In this respect, the compliance examination can be integrated into the existing procedures provided for by the Union harmonisation legislation listed in Section A of Annex I. The aim is to avoid additional regulatory burdens and procedural duplication, while ensuring consistency in the application of the European legal framework (recital 46 AI Act).
In conclusion, Art. 8 AI Act provides guidelines for the implementation and interpretation of the requirements applicable to high-risk AIS, in which the intended purpose of the AIS and the state of the art play a central role. Furthermore, the requirements of the AI Act should not be interpreted statically, but in an evolutive manner, in line with technological advances in the field of AI. This should encourage the continuous improvement of AIS. In our opinion, implementing and interpreting the requirements in light of their context gives the AI Act a ‘relative’ flexibility.