Artificial Intelligence Systems

Categories of the European regulation

The European Regulation on Artificial Intelligence (AI Act) adopts an approach based on the risks that an artificial intelligence system (AIS, cf. Caballero Cuevas, cdbf.ch/1382) may pose to the health, safety and fundamental rights of individuals. AIS are divided into four categories: AIS presenting an unacceptable risk, high-risk AIS, AIS presenting a limited risk, and AIS presenting a minimal risk. This commentary focuses on the first three categories.

A. AIS presenting an unacceptable risk

Art. 5 AI Act prohibits AIS whose placing on the market, putting into service or use presents an unacceptable risk. Paragraph 1 provides an exhaustive list of prohibited practices. Examples include:

  • social scoring AIS (let. c);
  • AIS assessing the risk of a person committing a criminal offence (let. d);
  • AIS inferring the emotions of a natural person in the workplace or in educational establishments (let. f);
  • biometric categorisation AIS that make it possible to deduce the race, political opinions, trade union membership or sexual orientation of a natural person (let. g);
  • AIS for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (let. h);
  • AIS designed to manipulate human behaviour in order to deprive individuals of their free will (let. a).

Despite the formal prohibition, the AI Act provides for exceptions which, under certain conditions, allow the use of AIS presenting an unacceptable risk. This is notably the case for real-time remote biometric identification AIS when they are used, for example, to search for victims of kidnapping, human trafficking or sexual exploitation, or to prevent a specific threat such as a terrorist attack. The conditions for their use are set out in art. 5 par. 2 et seq. AI Act.

At present, no AIS used in the banking and financial sector presents unacceptable risks. In practice, therefore, art. 5 AI Act should have no impact on banks and financial institutions.

B. High-risk AIS

The regulatory requirements of the AI Act apply essentially to high-risk AIS (art. 8 to 15 AI Act). Under art. 6 par. 1 AI Act, an AIS must be considered high-risk when the following two conditions are cumulatively met:

  • the AIS is intended to be used as a safety component of a product covered by the Union harmonisation legislation listed in Annex 1 AI Act, or is itself such a product (let. a); and
  • the product in question is required to undergo a third-party conformity assessment with a view to its placing on the market or putting into service (let. b).

In addition, art. 6 par. 2 AI Act refers to Annex 3 AI Act, which provides an exhaustive list of high-risk applications. In the banking and financial sector, particular attention should be paid to the following AIS:

  • AIS used in the selection or recruitment of candidates (cf. Hirsch, cdbf.ch/1384);
  • AIS used in decision-making on conditions of employment, promotion or dismissal;
  • AIS used to assess the creditworthiness of natural persons or to establish their credit rating (credit scoring); and
  • AIS used for risk assessment and pricing in relation to natural persons in life and health insurance.

Here again, the AI Act provides for exceptions. An AIS listed in Annex 3 AI Act may be considered not to be high-risk if it does not present “a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making” (cf. art. 6 par. 3 AI Act). This risk assessment is left to the provider of the AIS. A provider which considers that such an AIS is not high-risk within the meaning of art. 6 par. 3 AI Act must therefore document its assessment in detail before the system is placed on the market or put into service. The European Commission is required to provide guidelines specifying the practical implementation of art. 6 AI Act by 2 February 2026.

The list of AIS considered to be high-risk may evolve in the future in line with technological advances and their uses. To this end, art. 7 AI Act authorises the Commission to amend Annex 3 AI Act.

C. AIS presenting a limited risk

The AI Act does not define limited-risk AIS. However, it introduces transparency requirements (cf. art. 50 AI Act) for certain types of AIS that present neither an unacceptable risk nor a high risk. These requirements apply in particular to AIS intended to interact directly with natural persons, AIS generating synthetic audio, image, video or text content, and AIS generating or manipulating image, audio or video content constituting deepfakes. In our view, this category is primarily aimed at conversational agents and generative AI applications.

The transparency requirements are mainly aimed at informing users that they are interacting with an AI and at labelling AI-generated content as such.

The qualification of an AIS as presenting a limited risk does not exempt its provider from analysing whether the AI model it uses is a general-purpose AI model (cf. Caballero Cuevas, cdbf.ch/1382). Thus, the provider of a conversational agent based on a general-purpose AI model should in particular be required to comply both with the transparency requirements (cf. art. 50 AI Act) and with the requirements applicable to general-purpose AI models (cf. art. 53 and 55 AI Act).

In our view, banks and financial institutions could be required to comply with the transparency requirements set out in art. 50 AI Act, depending on whether they qualify as providers or deployers of AIS.