

Who is a deployer within the meaning of the European AI Regulation?

(Translated by DeepL)

The European Regulation on Artificial Intelligence (AI Act) sets out specific obligations for the various actors involved at the different stages of the development, operation and use of a tool covered by the AI Act.

The two most important ‘roles’ in relation to these obligations are those of ‘provider’ and ‘deployer’. It is therefore important to determine the role each company plays in relation to the artificial intelligence systems (AIS) or general-purpose AI models (GPAIM) it uses or develops (on the concepts of AIS and GPAIM, see Caballero Cuevas, cdbf.ch/1382), as this role determines the obligations applicable under the AI Act.

This commentary focuses on the concept of deployer within the meaning of the AI Act (on the concept of ‘provider’, see Fischer, cdbf.ch/1418).

  1. Who is considered a deployer?

A deployer is ‘a natural or legal person, public authority, agency or other body using an AI system under its own authority’ (Article 3(4) AI Act). Persons who use an AIS solely for personal and non-professional purposes are exempt.

In our view, the criterion ‘under its own authority’ makes it possible to distinguish (i) the use of an AIS, for example in relations with users (employees, customers), from (ii) the mere ‘enjoyment’ of an AIS and its results. Consequently, only those who have a certain degree of control over the AIS are classified as deployers. For example, a bank customer who uses a customer-service chatbot on a website does not have this level of control: the customer can ask questions, but it is the bank that controls how the chatbot responds, and the bank will therefore likely be classified as the deployer.

  2. What are the practical consequences of the role of deployer?

The obligations of deployers in relation to high-risk AIS are discussed separately (see Caballero Cuevas, cdbf.ch/1406).

With regard to other AIS, the main obligations of deployers relate to transparency and can be summarised as follows (Art. 50 AI Act):

‒ Data subjects must be informed when such AIS process personal data to detect emotions or intentions, or to categorise them on the basis of biometric data.

‒ Generated deepfakes must be recognisable as such; however, there is an exception for creators: where deepfakes are clearly part of a work of art, literature, satire or the like, the indication may be limited so as not to impair the presentation and enjoyment of the work.

‒ Texts published to inform the public on matters of public interest must indicate that they have been artificially generated or manipulated, unless the text has been reviewed by a human being and a natural or legal person has assumed editorial responsibility for its publication.

  3. What are the practical consequences for Swiss companies?

Most Swiss companies, insofar as the AI Act applies to them (see Fischer, cdbf.ch/1397), are likely to act as deployers. As such, their main responsibility is to comply with the transparency obligations imposed on deployers. In practice, we believe three approaches can be considered:

‒ Permanent display in the user interface, for example by means of a visible icon or label indicating that the user is interacting with an AIS. This method is suitable when the use of AI is constant and integrated into the service.

‒ Contextual display, such as an information window that appears when interaction with the AI begins or when AI-generated content is viewed. This solution limits information overload while ensuring targeted transparency.

‒ An ‘AI Statement’, for example in the form of a dedicated section on a website or in the terms of use, in which more detailed explanations are provided. This method is useful to complement the first two, but is not sufficient on its own.

The information must be provided in an understandable, proportionate and accessible form. Purely formal or technical transparency is not sufficient: data subjects must be able to understand that they are interacting with AI, what the implications are and, in some cases, what rights they have.