
Artificial intelligence and infringement of third-party rights
Adjusting the relative effect of warranty clauses

Julien Levis
(Translated by DeepL)
A bank can benefit from warranty clauses stipulated in the standard contracts of AI system suppliers. These clauses provide for compensation in the event of infringement arising from such systems’ use of training data protected by intellectual property rights. This commentary examines the scope of such clauses, their conditions, and the possible need to extend their benefit to third parties to the initial contract.
The quality of the responses provided by AI systems depends in particular on the volume of data dedicated to their training. The massive collection of this data is likely to infringe the rights of third parties (intellectual property rights, personal data protection rights, ethical issues, etc.).
The action brought in the United States by the New York Times – against Microsoft and several companies in the OpenAI group – alleges infringement of the newspaper’s intellectual property rights over articles used to train ChatGPT. Closer to home, and this time from the perspective of competition law, the French competition authority (Autorité de la concurrence) recently imposed a €250 million fine on Google. At issue was the company’s collection of data – in this case in breach of the rights of publishers and press agencies – in order to train its Bard AI system.
Banking institutions examining the advisability of using large language models (LLMs) are weighing up the legal pitfalls to be avoided. These include the prospect of legal action by third parties holding rights to the data processed by an AI system (invoking in particular title 5 of the LDA or chapter 8 of the LPD). The disputes mentioned above – as well as certain agreements signed in 2024 by LLM providers – illustrate the dynamics at work in the area of intellectual property rights. LLM suppliers appear to have feared that the acute risk of litigation invoking these rights would slow the adoption of AI systems by their customers. Warranty clauses have gradually appeared in their standard contracts, notably the “Copyright Shield” (in OpenAI’s contracts, see clause 3.b at the link) and the “Customer Copyright Commitment” (in Microsoft’s contracts, defined under that term at the link). These clauses focus on possible infringements of intellectual property rights arising from the use or distribution of output data.
However, AI system suppliers generally list a series of factors that disqualify a customer from benefiting from the warranty. These exclusions include the presence – in the prompt formulated by the customer – of data that is itself protected; knowledge the customer should have had of the rights encumbering the output data; infringement of other intellectual property rights (trademarks, neighbouring rights); or infringement resulting from modifications made by the customer to the output data. These suppliers also frequently list technical risk-mitigation measures without which the customer cannot claim the benefit of the warranty.
The breadth of these exclusions suggests that these clauses may prove difficult to enforce. Procedural challenges add to these obstacles (linked in particular to the vagueness with which some of these clauses define the nature of the guarantee granted – the Customer Copyright Commitment, for instance, refers to a right to be defended by Microsoft, whose practical implementation is open to question), as do challenges of private international law (contracts containing these clauses are generally subject to foreign law and/or foreign jurisdiction).
A further layer of complexity arises whenever the bank contracts not with the original supplier of the AI system but with an ‘intermediary’ service provider adapting such a system to the specific needs of the banking business. The bank is then a third party to the contract stipulating the warranty clause. To benefit from it, the bank will have to obtain from its direct contracting partner an assignment of part of the rights that partner derives from the clause. It will also have to protect itself against the risk that its service provider fails to invoke the guarantee itself.
The disputes and contractual practices observed today in relation to intellectual property in data processed by AI systems appear to be forerunners of wider issues. Other rights – such as those relating to personal data or ethical considerations – are affected by the processing of data by LLMs. These issues, together with the limitations of the clauses currently included in the standard contracts of the main providers, will probably lead banks to seek more comprehensive guarantees.
This is a particularly tricky exercise when dealing with large technology companies, which are reluctant to amend their standard contracts. By contrast, when the AI system is acquired from an intermediary service provider, the parties’ negotiating positions will be more balanced. These intermediary providers will also be able to adapt the AI system so as to mitigate certain risks. Not least, banks may find it advantageous to negotiate the onward transfer of the guarantees that LLM providers grant to these intermediary service providers, so as to aggregate the guarantees offered – by LLM providers and intermediary service providers alike – as well as the assets backing those guarantees.