Case report on Getty Images -v- Stability AI
Businesses hoped that the judgment in Getty Images -v- Stability AI, the biggest copyright case of the decade, would determine whether the training of an AI model and its outputs were lawful. Sadly, for everybody in both the creative industries and the world of tech, the question of whether such activities infringe copyright remains unanswered.
The judge did decide some peripheral issues but has left both the creative industries and the AI community without a clear winner.
The facts
At the heart of the case was the use by Stability AI of what Getty Images called its “lifeblood”: the photo library established in the 1990s using the Getty family’s oil money. Stability AI trained its AI model, “Stable Diffusion”, on pictures from this library without Getty Images’ consent. The training took place outside the UK.
Getty presented to the court a series of output images created by various versions of Stable Diffusion which it said could be traced back to photos in the Getty Images library. Damningly, a number of these Stable Diffusion creations reproduced Getty Images’ watermarks, which seemingly proved Getty’s intellectual property claims for copyright, database right and trade mark infringement (as well as passing off).
However, during the trial Getty Images abandoned its primary copyright and database right infringement claims because:
- Getty acknowledged that there was no evidence that the training and development of Stable Diffusion took place in the United Kingdom such that no acts of infringement took place in the UK (in a parallel case in the USA, it will be determined whether these activities amount to an infringement of copyright under US law);
- Stability AI had blocked the user prompts which generated the allegedly infringing AI outputs, such that Stable Diffusion was no longer capable of generating images which could be readily linked to Getty Images’ copyright works; and
- having abandoned the “training and development claim” and the “output claim”, Getty decided not to advance its claim for database right infringement, probably on the basis that it could no longer prove that Stable Diffusion had “extracted and re-utilised” a substantial part of the Getty Images photo library.
This left Getty with its trade mark infringement claim in respect of the watermarks on the AI outputs, and a claim that the Stable Diffusion AI model was itself an infringing article, such that Stability AI was liable for secondary copyright infringement on the basis that it had imported into the UK, possessed and dealt with an infringing copy.
A key issue in connection with the secondary copyright infringement claim was whether the licensed photos relied on by Getty Images were subject to exclusive licences giving it rights concurrent with those of the photographer copyright owners, so as to be jointly entitled to a remedy for copyright infringement. The court found that a number of the licences were “exclusive”, allowing Getty Images to bring its claim, but it also found that many were not exclusive licences. The practical lesson is that if you hand over your intellectual property rights to a business partner so that those rights can be commercialised, getting the licence terms right is essential if the full value of those rights is to be realised, not only through licensing but also through enforcement.
The big win for Getty Images, and a key takeaway from the case, is that the creators of AI models can be found liable for infringing outputs from their AI tools. This was determined on the basis of the judge’s findings of “double identity” and “confusion” trade mark infringement under sections 10(1) and 10(2) of the Trade Marks Act 1994 (“TMA”); Getty was unsuccessful in its “detriment” claim under s. 10(3) TMA and in its passing off claim.
The “double identity” and “confusion” trade mark infringement findings come as no surprise given the clear reproduction of Getty Images’ trade marked names as watermarks on some of the AI output images. However, as the judge said, these findings were both “historic and extremely limited in scope”, so they will not deliver Getty Images much in the way of damages when quantum is finally determined.
However, what users of AI tools can take from this is that where there is a clear link between an intellectual property right and an AI output, the AI company can be held liable for the resulting infringement. Hypothetically, on this basis it is possible to imagine Taylor Swift bringing a case against an AI song generator in respect of a prompt “create me a ‘Taylor Swift’ song” where the output lyrics reproduce a substantial part of her back catalogue. In such a scenario, what would be equally interesting (and another issue not addressed by the Getty case) is whether such AI output could be defended in the UK on the basis that the lyrics are a “parody” or a “pastiche”.
The important element of the decision for AI users is that an AI company can be found liable for the output from its AI model. One of the arguments Stability AI put forward was that its tool created infringing AI outputs only because of the user prompts entered into Stable Diffusion. Notwithstanding this “user prompt” argument, Stability AI was still liable for the output from its tool. Therefore, when commissioning an AI tool, users should check what the terms and conditions say about the AI company’s limitation of liability and its indemnification of users in respect of the output.
Notwithstanding that Getty Images established it controlled the exploitation of some of the photos in issue on the basis of exclusive licences, its claim of secondary infringement of copyright failed.
The problem for Getty was that it struggled to persuade the court that an AI model amounted to an “infringing copy”. Getty accepted that Stable Diffusion itself did not comprise a reproduction of any photos, but it argued that the definition of “infringing copy” was sufficiently broad to encompass an article (including an intangible article) whose creation or “making” involved copyright infringement. Getty pointed out that it was common ground that (i) the training of the AI model involved the reproduction (by means of storage) of the photos; and (ii) the “making” (or optimisation) of the AI model weights required their repeated exposure to the photo training data. Getty argued that this “making” satisfied the definition of an “infringing copy”.
Stability highlighted that its AI model was trained on copyright works in the United States, that copies of those works were never present within its AI model, and that the AI model cannot be an “infringing copy” where it has “never had the slightest acquaintance with a UK Copyright Work”. Stability also highlighted the fact that the act of training the AI model weights ultimately did not involve storing or reproducing the images in those weights.
The judge agreed with Stability AI. She said: “Stable Diffusion … does not store or reproduce any Copyright Works (and has never done so) [and so] is not an ‘infringing copy’”.
Despite the secondary copyright infringement decision going against Getty, the finding that an “article” (for the purposes of the Copyright, Designs and Patents Act 1988) includes intangible objects, such as AI models, helps everyone understand how an AI tool might be found to infringe copyright where the facts support the legal arguments. In other words, an AI company will not be able to escape liability on the basis that its tool is an “intangible” piece of software if that software stores copies of infringing works.
Conclusion and takeaways
Companies wishing to use AI tools now know that AI output can infringe the intellectual property rights of others. AI companies know that they can be liable for such output from their tools. We can also infer that if an AI model stores infringing copies of works, then the AI tool itself will infringe copyright.
However, we do not know (at least from a UK legal perspective) whether using copyright works to train an AI model amounts to an infringement of intellectual property rights.
On the plus side for users of AI, the case suggests that UK users can lawfully use AI tools which have turned a third party’s media assets into algorithmic data, provided that this transformational process takes place outside the UK and the AI output does not contain any traces of the underlying works.