Govt panel backs hybrid model: mandatory blanket licence for AI training
A central committee has proposed a simple trade-off: AI developers can train on any legally accessed copyrighted content, and creators get paid through a structured, government-backed system. The goal is to keep AI progress moving while protecting the economic rights of authors, publishers, and media houses.
Why this matters
Generative AI depends on massive corpora that include copyrighted works, yet the rules for permissions and payments remain unclear. To address this, the Department for Promotion of Industry and Internal Trade (DPIIT) formed a committee on April 28, 2025, to examine copyright issues, assess the adequacy of current law, and consult stakeholders across sectors.
What was on the table
Positions were polarized: tech companies pushed for broad Text and Data Mining (TDM) exceptions, while creators and publishers argued for licensing that preserves consent and compensation. The committee also reviewed approaches in the United States, Japan, the UK, Singapore, and the EU, and noted a related case pending before the Delhi High Court. Five models were considered:
- Full TDM exception (permission-free use)
- TDM with opt-out by rightsholders
- Voluntary licensing (creator-by-creator deals)
- Extended collective licensing
- Statutory licensing
Why other models were rejected
- Blanket TDM exception weakens copyright and cuts creators out of fair payment.
- Opt-out fails in practice: smaller creators may not know how to opt out, and once content is scraped, control is gone.
- Opt-out shifts the burden onto creators, lowers data quality, and demands heavy transparency that can slow product cycles.
- Individual licensing is costly, time-consuming, and unworkable at the scale startups need.
The hybrid model explained
The panel recommends a mandatory blanket licence. AI developers can use all legally accessed copyrighted content for training without seeking permission from each rightsholder. In return, creators receive compensation via a statutory payment right.
A central non-profit, formed by creators and approved by the government, would collect payments from AI developers and distribute royalties to members and registered non-members. Royalty rates would be set by a government-appointed body and remain open to judicial review. The panel cites several benefits:
- Single window for developers; lower transaction costs
- Fair, predictable payments for creators
- Broader, higher-quality training data for better model performance
- Level playing field for startups and incumbents
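To make the payment mechanics concrete, here is a minimal sketch of how a collecting body might split a royalty pool across registered creators. The pro-rata-by-usage allocation rule, the `Creator` type, and the usage metric are all illustrative assumptions, not part of the committee's proposal; the actual methodology would be set by the government-appointed rate body.

```python
from dataclasses import dataclass

@dataclass
class Creator:
    name: str
    usage_units: int  # hypothetical metric, e.g. works or tokens used in training

def distribute_royalties(pool: float, creators: list[Creator]) -> dict[str, float]:
    """Split a collected royalty pool pro rata by reported usage.

    Illustrative allocation rule only; real rates and formulas would be
    set by the rate-setting body and remain open to judicial review.
    """
    total = sum(c.usage_units for c in creators)
    if total == 0:
        return {c.name: 0.0 for c in creators}
    return {c.name: round(pool * c.usage_units / total, 2) for c in creators}

# Example: a 1,000,000-unit pool split across three creators
payouts = distribute_royalties(
    1_000_000.0,
    [Creator("publisher_a", 600), Creator("author_b", 300), Creator("archive_c", 100)],
)
print(payouts)  # {'publisher_a': 600000.0, 'author_b': 300000.0, 'archive_c': 100000.0}
```

A single-window scheme like this is what lowers transaction costs: developers pay once into the pool instead of negotiating creator-by-creator deals.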
Dissent on record
NASSCOM opposed the proposal and argued for a TDM exception covering both commercial and non-commercial use, with opt-out and safeguards. Their view: permissionless use with protective measures drives innovation more effectively than mandatory licensing.
What this means for government and legal teams
- Scope: Define "legally accessed" sources (licensed, purchased, publicly available, cached, user-uploaded, etc.).
- Coverage: Clarify whether the licence covers pre-training, fine-tuning, evaluation, and synthetic data loops.
- Outputs: Address whether any obligations attach to model outputs (e.g., attribution, exclusion lists).
- Rates: Establish a transparent method for setting, reviewing, and indexing royalty rates; enable sectoral differentiation.
- Governance: Specify composition, elections, audits, and conflict management for the collecting entity.
- Distribution: Define methodologies for allocating royalties across creators, including small and non-digital creators.
- Data and audits: Require reasonable usage reporting with privacy safeguards; allow audits without exposing trade secrets.
- Compliance: Set record-keeping timelines, penalties for non-payment, and dispute resolution pathways.
- Intersections: Coordinate with competition law, privacy law, and intermediary liability frameworks.
- Cross-border: Determine obligations for foreign developers training on Indian works or deploying models in India.
How developers can prepare
- Map data supply chains and maintain provenance logs for all training corpora.
- Budget for royalties and prepare to report usage metrics required by the collecting body.
- Segment "legally accessed" sources and remediate grey areas (scrapes without clear permission, dubious datasets).
- Stand up internal controls: source whitelists, removal workflows, and audit-ready documentation.
- Engage early with the collecting entity to register, clarify reporting formats, and test payment workflows.
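The first two preparation steps above can be sketched in code. The record fields, the access categories, and the function name below are assumptions for illustration; any real reporting format would come from the collecting body once it exists.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical categories for segmenting "legally accessed" sources;
# "unclear" flags grey areas needing remediation before training.
ACCESS_CATEGORIES = {"licensed", "purchased", "public", "user_uploaded", "unclear"}

@dataclass
class ProvenanceRecord:
    source_url: str
    access_category: str  # one of ACCESS_CATEGORIES
    content_sha256: str   # hash of the raw content, for audit matching
    retrieved_at: str     # ISO 8601 UTC timestamp

def log_source(raw: bytes, source_url: str, access_category: str) -> ProvenanceRecord:
    """Create an audit-ready provenance entry for one ingested document."""
    if access_category not in ACCESS_CATEGORIES:
        raise ValueError(f"unknown access category: {access_category}")
    return ProvenanceRecord(
        source_url=source_url,
        access_category=access_category,
        content_sha256=hashlib.sha256(raw).hexdigest(),
        retrieved_at=datetime.now(timezone.utc).isoformat(),
    )

record = log_source(b"example document", "https://example.com/doc", "licensed")
print(json.dumps(asdict(record), indent=2))
```

Content hashes let an auditor verify which works entered a corpus without the developer exposing the corpus itself, which is the balance the transparency bullets above are reaching for.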
Global context (for reference)
The EU permits TDM for commercial uses subject to an opt-out by rightsholders, a different policy bet on consent and control. See the EU Copyright Directive overview for background.
- EU Directive (2019/790) on Copyright in the Digital Single Market
- WIPO: Collective Management of Copyright and Related Rights
Open issues to watch
- How "registered non-members" will be identified and paid without friction.
- Treatment of user-generated content platforms and creator consent tools.
- Interaction with pending litigation before the Delhi High Court.
- Safeguards for academic and non-profit research while maintaining parity with commercial use.
Bottom line
The committee's hybrid approach swaps case-by-case permissions for a statutory payment pipeline. If implemented well, it could reduce legal risk for developers and deliver predictable income for creators. The details (rates, reporting, audits, and governance) will determine whether it actually works on the ground.