Artificial Intelligence and Copyright: Where Does the UK Stand?
The UK Government’s report on the copyright and AI consultation was recently published. While the report confirms that balancing the interests of copyright holders and AI developers is a complex exercise, it also gives an indication of the scenarios most likely to emerge in this fast-evolving environment.
The consultation focused on whether AI developers should be permitted to use copyright-protected works for training purposes without prior authorisation and, if so, under what conditions. The Government initially favoured a commercial text and data mining exception, coupled with a mechanism enabling rights holders to reserve their rights and enhanced transparency obligations on AI developers.
However, the consultation revealed a clear and entrenched divide. Creative industry stakeholders overwhelmingly supported a licensing-first model, emphasising the importance of consent, control and remuneration. The tech industry, on the other hand, warned that mandatory licensing could inhibit AI development and undermine the UK’s competitiveness as an AI hub.
Below are the key points on the Government’s position:
- Opt-out licensing model: The Government stepped away from its previously preferred opt-out model and committed to further evidence gathering, technical development and continued monitoring of case law and international developments. As a result, the current copyright framework remains unchanged, with fundamental questions, such as whether AI training involves infringing “copying”, continuing to be tested through litigation. It nevertheless remains a priority to ensure that the UK enforcement framework stays fit for purpose and that enforcement barriers are identified and mitigated.
- Transparency: This is an area of relative convergence. The consultation emphasises the importance of greater (but proportionate) clarity around training data sources, provenance of outputs and accountability mechanisms, confirming that there are issues intersecting with data protection, consumer protection and broader AI governance frameworks.
- Computer-generated works: Most respondents took the view that works generated solely by AI should not be protected, and that changes may be required to ensure that works created with the assistance of AI remain protected.
- Labelling: The principle of labelling content generated using AI has been well received. However, a more nuanced approach to AI-assisted content, as well as different approaches for different media types, will be required. For now, the Government will simply monitor best practices, aiming to align with international standards.
- Digital replicas and deepfakes: There is awareness of the growing problems caused by the use and monetisation of digital replicas of individuals without permission, particularly in a jurisdiction with no image or personality rights per se, but rather a patchwork of laws. There are no specific proposals to introduce personality rights; however, the Government will gather views on whether the current legal framework remains fit for purpose.
Takeaways
Rights holders should continue to assess how their content is accessed and used, and consider technical or contractual mechanisms for licensing and rights reservation.
AI developers should remain cautious when sourcing training data, ensure governance and record keeping processes are robust, and factor copyright risk into product development and deployment strategies.
While legal uncertainty persists, AI development remains encouraged, the UK Government’s dialogue with the industries remains open, and alignment with international standards is increasingly sought.
By: Serena Totino and Simon Casinader
