
Artificial intelligence undoubtedly offers great opportunities for human creativity. The UK is well placed to be at the forefront of AI development, but AI is also a highly disruptive technology that could pose risks to the livelihoods of human creators if not appropriately regulated.
The rise of generative AI in recent years has heightened anxiety amongst creators and the public more widely: people are concerned for the future of human creativity given machines’ ever-expanding ability to generate text, images, video and code.
Dr Caroline Emmer De Albuquerque Green, Oxford Institute of Ethics in AI, and Chris Morrison, Bodleian Libraries
While ‘AI innovation’ and ‘AI governance’ are often pitted against each other in current political narratives, workable legal and ethical frameworks will help leverage the benefits of AI for human creativity and vice versa, as well as build public trust.
AI governance frameworks are developing rapidly, including the governance of AI and intellectual property.
The UK government recently undertook a public consultation on proposals for amending existing copyright legislation to foster AI innovation, but the proposals were met with widespread criticism from the creative industries.
The proposals force respondents into one of two opposing camps, creating an unnecessary division between the AI and creative industries. This also comes at the expense of key stakeholders, such as researchers and civil society, whose voices matter too.
The focus of the government proposals is largely on copyright’s impact on the creative industries and AI sectors, reinforcing a divisive “copyright wars” narrative instead of exploring wider ethical and legal frameworks.
This does not take into consideration that licensing of copyright works is a well-established route for many AI companies and that collaboration is taking place across many sectors – including with universities and their libraries.
Likewise, the solution the government proposes centres on an ‘opt-out model’, which would require creators to indicate that they do not want their work used to train commercial generative AI models.
Given the widespread concern about generative AI, most professional creators would likely choose to opt out, making implementation of this solution an expensive exercise with little benefit.
Instead, the goal must be the development of legal and ethical frameworks that step away from binary approaches built on the idea that AI innovation and human creativity are essentially in competition and to be treated in isolation from each other.
Following lessons learned from existing research, a more inclusive, deliberative process would follow several core principles:
Inclusive: AI development is not just a matter for tech experts and entrepreneurs.
If AI is to benefit creativity and humanity, its development must consider the collective needs of creators, technologists, policymakers, researchers and the wider public. Any consultation should therefore include multiple stakeholders with different perspectives.
Well informed: In order to participate in discussions about the law, people need to be informed.
Copyright literacy has been defined as ‘Acquiring and demonstrating the appropriate knowledge, skills and behaviours to enable the ethical creation and use of copyright material’.
Copyright is a complex and technical area of law that is understood and experienced differently by different communities. Although not everyone needs to become an expert in copyright law, using innovative educational resources and empirical evidence on the way copyright impacts on the creative economy makes it possible to have informed and inclusive discussions about how laws should work.
Equitable: Deliberative and inclusive processes will be successful only when power dynamics between stakeholders are challenged.
All participants must have an equal opportunity to influence the outcomes. In practice, this means a willingness to listen to sometimes opposing perspectives and finding workable compromises.
Accountable: The consultation must take place transparently, with clearly defined aims, a justification for how the process is being undertaken and how outcomes were determined.
As AI continues to evolve, shaping the future of creativity, governance must keep pace by fostering a legal and ethical landscape that supports both technological advancement and human ingenuity.
By engaging a diverse range of voices, from creators to technologists, civil society groups and policymakers, frameworks can be developed that harness AI’s potential while safeguarding the rights and livelihoods of those who drive human creativity forward.