Creative Crossroads: Will AI Kill the Arts?
I arrived at the Scottish Parliament with a few minutes to spare before a session on the future of the creative industries. Taking my seat, I met Francis Bondd, who jokingly described himself as “an engineer during the week, rockstar by the weekend.” Dressed sharply in a black suit, white pearls, and a silver hoop, he clearly embodied a stage persona. Originally from Nigeria and self-taught on the guitar, Francis moved to Scotland two years ago.
He shared the challenges he faces as a migrant artist: “When you walk into a room, the first question is always, ‘What’s he going to sound like?’ I have to prove myself 10 times more than anyone else.” With nearly 9,000 Instagram followers and hopes of playing at the Fringe Festival, his excitement was palpable. But then his tone shifted. “I’d feel terrible,” he admitted quietly, “if AI takes all that away from me.”
Widespread Fear Among Creatives
This fear isn’t unique to Francis. Thousands of artists across the UK share concerns about AI’s impact on their work. Tools like ChatGPT and DALL·E 2 can create entire stories or images in seconds, blurring the line between technology and human creativity. This has sparked one of the biggest political conflicts of the year.
Baroness Beeban Kidron has led efforts in the House of Lords to amend the Data (Use and Access) Bill. The goal: require tech companies to disclose their use of copyrighted material in AI training. She warned that ignoring this would allow “widespread theft.” High-profile musicians like Elton John, Dua Lipa, and Paul McCartney support her stance.
However, the House of Commons repeatedly rejected the amendment, invoking financial privilege to block it. This parliamentary move has raised alarms about the future of the creative industries, which contribute £124bn annually to the UK economy.
Voices from the Creative Sector
Clementine Collette, a fellow at UK Research and Innovation’s Bridging Responsible AI Divides (Braid) programme, expressed her frustration: “Transparency is the most basic right artists deserve to enforce copyright effectively.” Her sentiment resonates widely.
Jade Law, CEO of Wardog Studios, recalled finding her own work in AI training datasets without consent. “I’m so upset,” she said. “It’s all there and there’s nothing we can do to stop it.”
AI and Public Sector Controversies
The concern has reached Scottish politics. Conservative MSP Sandesh Gulhane questioned First Minister John Swinney about ScotRail’s AI train announcer, ‘Iona’, which allegedly used Scottish artist Gayanne Potter’s voice without permission. Swinney confirmed the issue was being addressed.
ScotRail is currently in talks with ReadSpeaker, the tech company behind Iona. Yet, the AI system isn’t listed in the Scottish AI Register, a public database meant to provide transparency around AI use in government projects. The register aims to expand to all public sector bodies soon.
Artists Are Not Against AI, But Want Control
Most creatives do not oppose AI outright. A report by the Design and Artists Copyright Society found that while nearly 75% of UK artists worry about unauthorised use of their work in AI training, 84% would share their work under proper licensing agreements.
The problem is that many tech companies see copyright and AI as incompatible. In a submission to the Lords’ Communications and Digital Committee, OpenAI stated that "it would be impossible to train today’s leading AI models without using copyrighted materials."
Government’s Stance and Artists’ Concerns
The UK government appears to side with innovation. Prime Minister Keir Starmer called AI the “defining opportunity of our generation” when launching the AI Opportunities Action Plan. Among 50 commitments, it promised to reform text and data mining laws to remove barriers to innovation.
For many artists, this sounds like a warning. Critics say the government prioritized tech growth over copyright protections. The AI and copyright consultation closed shortly after the plan’s launch, leaving many feeling unheard.
Jade Law puts it bluntly: “Starmer is about to throw creatives under the bus to give big tech companies what they want.”
Calls for a Fairer Data Model
Clementine Collette and other Braid researchers advocate for an opt-in model for AI training data. This would require explicit permission before using someone’s work, unlike the current opt-out system that assumes consent by default.
She warns that the opt-out approach forces creators to fight to protect their own work, undermining their rights. Caterina Moruzzi, also from Braid, urges more “granular” control, allowing creators and audiences to track who contributed to what part of a work and when.
This detailed tracking could clarify responsibility and authenticity in AI-created content. It’s a way to manage challenges posed by AI’s growing role in creativity.
New Forms of AI-Enabled Art
Moruzzi finds promise in AI’s ability to expand creativity rather than replace existing art forms. An example is the generative AI platform Invoke, which recently received the first US copyright protection for an AI artwork. The piece involved significant human input in selecting and arranging AI-generated fragments, akin to collage art.
This layered approach excites many who see AI as a tool for new creative possibilities.
Risks of Cultural Homogenization
Moruzzi also warns about the cultural risks of AI. Most leading AI models are trained on Western data, which could narrow society’s view of art and beauty. Many global traditions, such as oral histories, aren’t digitised and won’t be captured by AI.
This risks creating a homogenised culture dominated by Western norms. Much as social media echo chambers do, AI could reinforce narrow definitions of “good” art, sidelining diverse voices.
Impact Beyond the Arts
Clementine Collette highlights another risk: AI models cannot discern truth, so work generated from misleading prompts could distort art’s value and unintentionally perpetuate stereotypes or falsehoods.
Ed Newton-Rex, founder of Fairly Trained, warns that AI’s reliance on web crawlers could deepen paywalls and limit open access to information. This threatens researchers, journalists, and anyone who depends on free internet resources.
Potential Benefits with the Right Protections
Despite fears, AI offers opportunities. Francis Bondd sees a future where AI benefits creators if fair protections exist. “If I become a big name, there’s a channel for benefits to come back to me,” he said.
Moruzzi agrees. AI can democratize access to creative tools and help artists reach wider audiences by embracing diverse cultures and traditions.
Examples include the University of Glasgow’s Our Heritage, Our Stories, a virtual national collection connecting community-generated historical content using AI. Other projects combine AI and eye-tracking tech to help people with limited mobility create art.
Still, without balanced regulation, the industry’s future remains uncertain. Moruzzi notes that AI already replaces tasks that early-career creators rely on for income, potentially making it harder to launch new artistic careers.
What’s Next?
The AI Bill, currently under discussion in the House of Lords, aims to close regulatory gaps around AI technology. It could reignite political tensions and impact the creative sector heavily.
BD Owens, president of the Scottish Artists Union, warns that even with future legislation, the damage to artists’ livelihoods may be irreversible.