B-04-49 FINMA Guidance 08/2024

Use of Artificial Intelligence

Related articles

European AI Regulation

The principles for interpreting the requirements for high-risk AI systems

The European AI Regulation (AI Act), designed in particular as legislation governing the safety of AI systems (AIS), imposes requirements that any high-risk AIS must meet before it is placed on the market or put into service in the European Union, and throughout its life cycle. In the banking and financial sector, the AI Act classifies credit scoring AIS as high-risk. The requirements are set out in Articles 8 to 15 AI Act and relate in particular to the[...]

Automated individual decision

The credit scoring company does not have to disclose its algorithm, but must explain it

The credit scoring company must explain to the person concerned the procedure and principles applied in practice to establish his or her solvency profile. Furthermore, the company's business secrecy does not preclude the communication of information to the authority or the court, which must weigh up the interests involved (judgment of the CJEU of 27 February 2025 in case C-203/22). A mobile phone operator refused to allow an Austrian national (CK) to conclude a mobile phone contract, which would have[...]

Yet another European regulation with extraterritorial application?

Application of the AI Act to Swiss companies

Following on from the EU's General Data Protection Regulation (GDPR), the European Regulation on Artificial Intelligence (AI Act) provides for a broad territorial scope, covering not only companies incorporated within the EU but also some located in third countries such as Switzerland. Swiss financial intermediaries may therefore be affected by the AI Act, whose extraterritorial dimension is presented in this commentary. A. Criteria for determining the territorial scope of the AI Act. We will discuss here the two[...]

Artificial intelligence

FINMA’s expectations in terms of governance and risk management

Banks and financial institutions are increasingly integrating artificial intelligence (AI) into their internal services and processes (see e.g. Jotterand, cdbf.ch/1377). This use can present operational, legal and reputational risks (see e.g. Levis, cdbf.ch/1380), as well as a growing dependence on third-party suppliers, especially for AI models and cloud services. Added to this is the difficulty of assigning clear responsibilities in the event of errors in the AI system or model. The use of AI by banks and financial[...]
