Out-Law Analysis

IP risks and uncertainties could hinder AI innovation in businesses


Developers, procurers and deployers of artificial intelligence (AI) tools should have a good understanding of the intellectual property (IP) uncertainties and risks and how they can be navigated to ensure the investment in AI and its potential can be fully realised.

Currently, businesses wanting to use AI to optimise their processes and productivity, unlock new products and services, or generate more ambitious outputs that they can commercialise face obstacles due to the lack of clarity surrounding the IP rights subsisting in AI-generated content.

Regulatory protection of IP rights in AI outputs

The UK does not currently have an AI-specific regulatory regime similar to the EU AI Act. Existing IP and data protection laws apply to AI outputs. However, they were not drafted with AI outputs in mind, and it currently remains unclear whether AI outputs are protectable and, if they are, who owns the IP in them.

The UK courts have not yet been able to answer the ‘big’ AI questions. While the Supreme Court has held that an AI system cannot be named as an inventor for the purposes of a patent application, the question of whether an AI-devised invention is itself patentable remains unanswered. Copyright protection for AI outputs has not yet been tested in the UK.

Under the UK Copyright, Designs and Patents Act 1988 (CDPA), ‘computer-generated works’ will be protected by copyright, provided they are original. A ‘computer-generated work’ is defined as a work that is ‘generated by a computer in circumstances such that there is no human author of the work’.

However, difficulties arise when attempting to apply this definition to the output of an AI tool, which will have been trained on content such as pre-existing text, images and videos created by humans. It could be said that there is some human authorship in AI-generated output: the AI is likely to have been developed by one or more humans, trained on human-created and human-curated content, and prompted by a human to produce the output.

Yet, it is very difficult to delineate where the human authorship ends and where the AI authorship begins. This uncertainty has led to ongoing debate as to how much human effort and creativity is required for an AI-generated output to be protected by copyright – and such effort and creativity is exactly what IP rights were designed to protect.

Currently, AI has no legal personality and is therefore incapable of owning IP rights, so it will likely be the AI developer, or possibly the user of the AI tool, who will own any IP rights that may exist in the AI-generated content. Ownership may also depend on the AI developer’s terms of use.

Whilst contractual terms may purport to delineate ownership, a contractual statement of ownership will not provide IP protection unless that ownership is recognised under IP laws, and it may instead create further uncertainty and a risk of disputes down the line.

Potential consequences for businesses

Due to the lack of certainty and risk of disputes, businesses may be reluctant to use AI to generate products, services and solutions as it may prove difficult to protect their outputs from copying and use by competitors.

Businesses may decide that the investment in AI is not worth making whilst AI-generated outputs cannot confidently be protected and commercialised.

The copying and/or infringement of AI-generated IP may also be difficult to evidence as the alleged infringer may have generated the output from the same or another AI tool. The calculation of damages for any such infringement is also unclear.

When generating AI outputs, it is also important to protect trade secrets. If a trade secret is used as part of an input prompt, this could amount to a disclosure of the trade secret, with the result that it no longer benefits from protection. Information or data entered into an AI tool may be assimilated by the tool and made available to the developer for use within the tool and/or for training it, and could also be accessible to other users of the tool.

As well as risking the disclosure of trade secrets, this could result in the loss of novelty in respect of related inventions and, therefore, the loss of the right to pursue a patent application for them. These risks should be addressed by ensuring appropriate contractual protections are in place when procuring AI tools, or by ensuring that no trade secrets or confidential information are used as inputs when deploying them.

There may also be data protection risks when using AI tools. It is often far from clear what happens to data once it has been entered into an AI tool – and such data can include personal data. The processing of personal data is subject to strict compliance requirements under data protection laws, including in the context of AI. Data protection regulators have been keen to emphasise the importance of GDPR compliance when training and using AI tools, with some regulators banning the use of certain tools until data protection compliance issues are resolved.

Businesses may be able to mitigate any such risks to trade secrets, confidential information and personal data by deploying ‘closed’ AI tools which do not connect with any external systems and therefore do not send confidential or personal data outside of the business. Such tools are particularly important for businesses which collect and process large amounts of confidential and/or personal information that they propose to input into AI tools.

Many businesses are implementing AI for routine organisational and administrative tasks, such as the transcription of meetings. However, where AI is used for use-cases that could generate valuable outputs, the investment could be jeopardised by the IP uncertainties explained above.

This could represent a real lost opportunity, particularly for research and development-intensive, design-intensive and other businesses for whom AI could be the key to unlocking highly efficient drug discovery, personalised healthcare, predictive analytics and automation, to name but a few examples.

For these use-cases, ensuring human involvement in the ideation, inventive and creative processes alongside AI will be key in the short term to achieving valuable IP protection for outputs, and such human involvement will also be important in the long term for other regulatory reasons.

Therefore, provided outputs are generated with sufficient human involvement, the IP risks and uncertainties can be navigated and should not deter businesses from investing in AI or have a wider ‘chilling’ effect on AI technology development as a whole.

Co-written by Concetta Dalziel.
