Protecting Creative Workers from AI: Urgent Steps Needed to Preserve Human Originality and Rights
Equity urges urgent action to protect creative workers from AI data scraping without consent or payment. Transparency and fair remuneration must be enforced to preserve human originality.

Urgent Action on AI Needed to Protect Creative Workers
Equity is calling for urgent measures on artificial intelligence following a new report that highlights the need to preserve human originality and carry out ongoing impact assessments. The report reflects the concerns of Equity members and was presented in Parliament to the All-Party Parliamentary Group on the Future of Work.
Equity General Secretary Paul W Fleming addressed lawmakers, warning that the current approach to AI amounts to “legalising theft” of creatives’ work through data scraping without consent or payment. He urged support for Baroness Kidron's amendment to the Data Bill to safeguard creative workers’ data and personality rights.
The message is clear: protecting human-made stories in our culture demands transparency from AI companies and strict government enforcement of UK copyright laws. Creators’ rights to consent and fair remuneration must be respected.
Key Findings and Recommendations
Tom Peters, Equity’s Head of Policy and Public Affairs, explained that Equity members took part in extensive research involving 335 freelance creatives, carried out by Queen Mary University of London, the Institute for the Future of Work, and The Turing Institute. The study reveals significant impacts of Generative AI on the UK’s creative economy and calls for urgent protective action. Its key recommendations are:
- Fair Remuneration: Enforce current ownership rights and develop new systems to ensure revenues from GenAI-generated content fairly return to the original human creators.
- Legislative Reform: Update UK laws to better protect freelance creatives by covering employment rights, skill development, and AI-specific challenges, in collaboration with unions and industry experts.
- Inclusive AI Governance: Integrate the perspectives of creative workers, across all roles affected by AI in the creative sector, into AI regulation.
- Stronger Regulation for AI Firms: Require transparency, swift action on copyright infringement, and compensation when AI uses creative works without permission.
- Ongoing Impact Assessments: Regularly track how GenAI affects job quality, income security, and working conditions for creatives, with clear accountability.
- Preserving Human Originality: Implement “human made” watermarking and provenance tools to help consumers identify AI-generated content and support original human creativity.
- Workforce Training and Empowerment: Offer education on intellectual property, labour rights, contract negotiation, anti-mimicry tactics, and how to address bias and misrepresentation in AI tools.
Conclusion
The report was formally launched at the SXSW London technology festival. Tom Peters emphasised that the findings provide a clear plan to ensure AI adoption does not undermine the UK's creative industries. He called out big tech companies for “industrial-scale theft” of creators’ work used in AI models without consent or fair pay.
He stressed the need for immediate action to keep human-made stories central to culture by enforcing copyright law and demanding transparency from AI firms. According to Peters, creators’ rights to consent and compensation are non-negotiable.
David Leslie, Professor of Ethics, Technology and Society at Queen Mary University of London and CREAATIF Project Lead, said freelance creatives are especially vulnerable to AI’s impact because they often lack the protections that come with salaried employment. The project’s recommendations offer practical steps to address these challenges while recognising the ethical opportunities GenAI could provide if handled responsibly.
For those interested, the full report and its recommendations are available on the Queen Mary University of London website.