Guest Article by Nick Allan and Adrian Aronsson-Storrier of Lewis Silkin
The past year has seen an explosion in the use of generative AI tools, with predictions that the tools will revolutionise video game development, from coding through to artwork and voiceovers. This uptake of generative AI has, however, taken place against a backdrop of continuing legal uncertainty surrounding whether the development and use of these AI models is consistent with the intellectual property rights of those whose creative works have been used to ‘train’ the models.
Game developers and publishers should take care if incorporating generative AI into their development process. The terms and conditions for many generative AI tools place the risk of copyright infringement on the user. For example, the OpenAI terms of use for non-commercial users of ChatGPT and DALL·E seek to make users responsible for any copyright infringement risks arising from the use of generated content, and require business users to indemnify OpenAI for any losses arising from the use of generated content. In addition, game developers may not own IP in generative AI outputs – many jurisdictions (including the US) do not provide copyright protection for AI-generated materials, and the terms and conditions of some free generative AI tools provide for the AI developer to retain any IP in outputs. This means that incorporating AI content into a final published game may make it more challenging to protect a game developer’s IP.
Other AI developers have sought to reduce the risk of copyright infringement from the use of their tools. For example, Unity recently released a generative AI tool called ‘Muse’, which generates “game-ready textures” and 2D art. The company has been careful to reduce the risk that outputs from Muse may infringe the IP rights of other creators, including by training its model on “original datasets that Unity owns and has responsibly curated” – an approach to reducing risk which is also adopted by Getty Images and Adobe for their AI tools. Unity does not, however, go so far as to offer any warranty that the outputs of the tool are entirely free from the risk of infringement, so developers should be aware that they could still be on the receiving end of a copyright infringement action if incorporating Muse outputs into their games.
A number of technology companies, including Microsoft (GitHub Copilot), Google (Vertex AI), and Adobe (Firefly), have gone further than Unity and offered to defend users if they are subject to intellectual property claims resulting from their use of the output of some of their generative AI tools. These indemnity offers are, however, not absolute, and in some instances do not apply where the user has intentionally sought to generate infringing materials (for example, by entering a text prompt seeking an artwork “in the style of” a well-known artist).
Even if not using generative AI, game developers should hold suitable insurance against the risk of copyright infringement litigation. From our discussions with GG Insurance’s CEO, Philip Wildman, we understand that for the moment standard industry insurance policies continue to insure against the risk of copyright infringement, even if the infringing content is AI generated. We expect that, over the coming years, there will be additional copyright litigation over AI generated outputs, and that the outcome of those disputes will provide insurers with a better sense of whether to price in the risk of generative AI or seek to exclude generative AI from standard policies in the future.
In summary, there are several practical steps that can be taken to reduce the risk of copyright infringement when using generative AI in the game development process:
- Use the right tools. Restrict your use of generative AI to providers that offer indemnities on their outputs or which have followed Unity’s approach of training their tool on owned or appropriately licensed content. Ensure that staff are exclusively using the right version of the tool as well, typically a “pro” or paid version.
- AI policies. Mitigate risks by having clear internal policies for staff on the use of generative AI, including restrictions on staff uploading code or other confidential materials into any tool which uses inputs or prompts for training purposes.
- Final game assets. If using generative AI tools in the early development process, for example to generate concept art or to quickly iterate ideas, make sure that all end-assets included in the final game are created by a human, to ensure that rights in the game can be enforced in jurisdictions that do not provide copyright protection to generative AI outputs.
- Documentation. Document any generative AI prompts and carry out a reverse image or text search of the AI generator’s output to check for close similarities with existing work to reduce the risk of infringement.
- Risk and reward. Consider your view on the trade-off between risk and reward of using generative AI tools, based on your position in the industry, player expectations and development cycle time – for example, AAA publishers may want to be more conservative than mobile developers.
- Contractual considerations. Be careful to comply with warranties and indemnities when contracting with third parties (for example, under a publishing or work-for-hire deal), and check to see whether any indemnities are ‘back-to-back’ with those in the generative AI tools. You should also be aware of warranties and restrictions when distributing games on online stores. For example, Valve requires developers to either “own or have adequate rights to” the content in their games which may currently restrict the distribution of games incorporating generative AI outputs on Steam, whereas the Epic store is more welcoming of content created using generative AI.
- Insurance policies. When renewing insurance policies, confirm that your intellectual property insurance continues to provide suitable cover for any AI generated outputs and be mindful of any potential exclusions in this regard.
Please get in touch with the team at Lewis Silkin to learn more (details below).
Nick Allan – Partner
- +44 (0)7834 176 677
- nick.allan@lewissilkin.com
Adrian Aronsson-Storrier – Practice Development Lawyer
- +353 1 566 4508
- adrian.aronsson-storrier@lewissilkin.com
GG Insurance Services
GG Insurance Services has particular expertise in addressing the emerging challenges and risks associated with the generative AI space, particularly within the video game industry. The firm offers a suite of innovative insurance solutions tailored to the unique needs of this rapidly evolving sector. With its deep understanding of both generative AI technologies and the gaming industry, GG Insurance is well-equipped to provide game developers with cutting-edge insurance coverage, including for complex scenarios such as AI-related liabilities and other digital risks.
Its ability to assist extends beyond traditional insurance offerings. GG Insurance is committed to understanding the intricacies of AI integration in gaming, ensuring that its clients are not only protected against current risks but are also prepared for future advancements in technology. This proactive approach demonstrates the firm’s dedication to supporting the growth and development of the video game industry in the age of generative AI.