AI's Frankenstein Effect: RMIT Expert Urges Laws to Protect Artists

Creatives warn AI products are 'Frankenstein-like,' stitched from unlicensed work that displaces gigs. They urge consent, transparency, licensing, and payment for training data.

Published on: Oct 10, 2025

AI Is Generating "Frankenstein-like" Products From Stolen Creative Work

Creative industry representatives at a recent Australian Senate inquiry warned against giving large language models free access to Australian content. An expert from RMIT argued that much of the output now replacing paid creative work is stitched together from unlicensed, uncredited data.

The claim is blunt: voice acting, background music, illustration, and visual design generated by these systems pull from existing work without consent. The end product is a composite: useful to buyers, harmful to the people whose work made it possible.

What the "text and data mining exception" means

The push for exceptions that let companies scrape and analyze copyrighted material at scale is framed as innovation policy. For creatives, it can function like mass appropriation if consent, credit, and compensation are missing.

Policy bodies are actively exploring this issue. See the Australian Attorney-General's Department on copyright and AI for ongoing consultation and updates: Copyright and AI (AGD). For a global overview of text and data mining debates, WIPO's explainer is useful context: WIPO on Text and Data Mining.

Why this matters to creatives

  • Displacement: clients swap custom briefs for fast, low-cost AI outputs trained on your market's styles.
  • Style mimicry: models imitate voice, tone, and aesthetics developed over years, without attribution.
  • Rate pressure: a surge of derivative content pushes rates down and shortens timelines.
  • Credit and consent: your name is off the work, your data is inside the work, and you weren't asked.

The core argument

RMIT's expert warned that the current approach creates "Frankenstein-like" products assembled from unlicensed creative labor. Beyond intellectual property, the concern is substitution: machines trained on your work are booked instead of you.

The call to lawmakers is clear: require consent for training data, ensure transparency about data sources, and create mechanisms to pay the people whose work fuels these systems.

What lawmakers should consider

  • Consent-based data access: no training on copyrighted work without permission.
  • Collective licensing: rights organizations that negotiate fees for training and usage.
  • Transparency: public registries of training datasets and clear audit trails.
  • Attribution and provenance: standards that track creative inputs and enable claims.
  • Enforcement: real penalties for unauthorized training and commercial use.

What you can do now

  • Update contracts: ban AI training on your deliverables without written consent and define usage scope.
  • Control exposure: set robots.txt and metadata to discourage scraping, and limit high-res uploads where possible (see the sketch after this list).
  • Use provenance tools: adopt C2PA/Content Credentials to signal authorship and detect tampering.
  • Join collectives: coordinate with unions, guilds, and rights orgs to push for licensing and fair pay.
  • Document your process: keep dated drafts and project files to prove authorship and support claims.
  • Learn ethical workflows: if you use AI, do it with consented datasets and clear client terms. For skill-building that respects your craft, see curated resources by job role: Courses by Job.
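
For the robots.txt step above, here's a minimal sketch. The user-agent tokens shown (GPTBot, CCBot, Google-Extended) are crawler names published by OpenAI, Common Crawl, and Google respectively; the list is illustrative and goes stale, so check each vendor's current documentation before relying on it.

    # Discourage known AI-training crawlers (illustrative tokens;
    # verify current names against each vendor's documentation)
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    # Ordinary search crawlers remain unaffected
    User-agent: *
    Disallow:

Keep in mind that robots.txt is advisory: compliant crawlers honor it, non-compliant ones ignore it. Some platforms also read non-standard "noai" metadata tags, though support varies. Treat these as one layer alongside contract terms and provenance credentials, not a guarantee.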

Bottom line

AI isn't a neutral tool when it's trained on unlicensed creative work. Without consent, credit, and compensation, it becomes a replacement engine built from the very labor it displaces. Clear rules and practical safeguards are overdue.