Who Should Own the Creative Commons in the Age of Generative AI

Generative AI uses vast creative works without compensating original creators, threatening their livelihoods. A public funding model could support creators and keep cultural content accessible to all.

Published on: Aug 09, 2025

Generative AI and the Creative Commons

Generative AI models are built on the collective work of countless creators—writers, musicians, journalists, coders, illustrators, photographers, and filmmakers. Behind every AI-generated output stands a vast, unseen workforce whose creations were used without permission, and who have rarely been acknowledged or compensated by the tech giants profiting from their labor.

In October 2024, over 10,000 actors, musicians, and authors publicly warned that unlicensed use of their work to train AI threatens their livelihoods. Within months, the number grew to 50,000 signatories. Instead of tightening copyright restrictions, a different approach is needed: treat creative knowledge as a public good, funded and accessible to all—like roads, vaccines, or public broadcasting.

The Economics of Creative Work

Information often behaves as a public good: it is hard to exclude people from accessing it, and the marginal cost of copying it is near zero. When a good cannot be fenced off, markets tend to underproduce it, because people can free-ride rather than pay. Digital content is especially exposed, since online distribution is nearly impossible to control.

Generative AI models such as ChatGPT produce convincing responses by synthesizing massive amounts of data, much of it scraped from the public domain. Because this content is freely available online, stopping its collection is almost impossible. Some reports suggest major AI models have consumed nearly all publicly available internet content.

The opaque nature of these models makes tracing specific outputs back to original sources nearly impossible, complicating copyright enforcement. Governments hesitate to intervene, fearing that strict regulation might stifle innovation.

Competition with Creators

AI-generated content now competes directly with the creators whose work it was trained on. News outlets have laid off reporters after adopting automated story generators. Image banks face floods of AI-created artwork, and software firms rely on AI tools to produce boilerplate code, reducing demand for junior developers.

Creators are pushing back. The New York Times is suing OpenAI for using its archives; prominent writers have launched class-action lawsuits; and major record labels are suing AI music generators for copying their catalogs. This backlash highlights the growing tension between AI firms and the creative workforce.

Meanwhile, governments invest heavily in AI development, often with few conditions, while the creative labor fueling these models remains unpaid. The environmental footprint of AI is also significant, with data centers consuming vast amounts of energy and water. McKinsey estimates generative AI could add $4.4 trillion annually to the global economy, yet funding for arts and culture is shrinking.

AI is Eating the Commons

There are two ways to respond: patch the market or build a public alternative. Fixing the market would mean stronger copyright protections, digital paywalls, and royalty payments to creators. Some platforms already license data to AI firms, but often the original creators get little of that revenue.

Stronger copyright enforcement risks creating digital feudalism, letting dominant platforms lock down content and extract value while sidelining creators. Given the sheer volume and diversity of content feeding AI, micro-licensing is impractical and costly. It would also hinder innovation and marginalize smaller players and independent creators.

Markets fail under precisely these conditions: when excluding non-payers is hard and transaction costs are high. In such cases, alternatives to market provision are necessary.

The Public Alternative

Instead of trying to fix a market that struggles with public goods, governments should support the cultural commons and steer innovation toward public benefit. Just as taxes fund streetlights and public health, creative content production should receive public support, with outputs remaining in the public domain.

Public funding models like the BBC licence fee or France’s National Center for Cinema have a proven track record. They provide creators with stable income, encourage innovation that serves citizens rather than advertisers, and enable artistic risks. This approach preserves cultural heritage, enriches education, strengthens social bonds, and fuels democratic debate.

This model applies across all creative fields, and generative AI strengthens the case for it: by recombining existing works, AI acts as an “art multiplier,” increasing the volume and diversity of content derived from the cultural commons.

Yet public funding for creators is declining. For example, UK support for national arts bodies dropped nearly 20% per person from 2009 to 2023. As AI-generated content grows and displaces human labor, funding must reflect these new realities.

Additional public spending requires responsible financing. One proposal is a levy on the revenues of major AI companies, collected by a national or international agency. As AI becomes more embedded in daily life, contributions could grow alongside industry profits.

Funds could be distributed through independent grant councils on multiyear cycles, supporting a wide range of disciplines and regions. This setup would give developers access to vast public knowledge without complex licensing, provide creators with stable income detached from volatile markets, and expand the cultural commons for everyone.

Every time we use AI tools like ChatGPT, we draw on millions of creators’ labor. Cultural production has become a global cooperative, and its funding model needs to catch up. As AI reshapes our creative landscape, society must align its values and policies to ensure fairness and sustainability.