AI policy and journalism: what legal teams need to know now
AI is already embedded in newsroom workflows, from transcription to content moderation. The harder problem is policy: how governments define, constrain, and enable its use across the information space.
Recent analysis by the Center for News, Technology and Innovation (CNTI) reviewed 188 national and regional AI strategies, laws, and policies. It focused on seven components with direct implications for journalism: freedom of expression, manipulated or synthetic content, discrimination and bias, intellectual property and copyright, transparency and accountability, data protection and privacy, and public information and awareness.
Key signal for counsel
Mentions of "journalism" in AI bills are increasing. That's a double-edged sword: it shows awareness of sector impact, but it can invite governments to define the profession and its boundaries. Legal teams should press for functional definitions and avoid status-based licensing or credential tests for who counts as a journalist.
Latin America: active debate, few enacted laws
In Latin America and the Caribbean, CNTI identified 80 strategies, policies, or laws, with five explicitly naming journalism or journalists. The region leads on explicit references, yet actual enactment remains limited. That leaves room to shape good law, but it also creates risks to editorial independence if texts are drafted loosely.
Two legislative designs to watch
Ecuador's 2024 Organic Law for the Regulation and Promotion of AI is described by CNTI as among the "most comprehensive." Article 31 pushes content recommendation systems to improve equitable access to public-interest content from local, community, and independent outlets. Article 32 adds safeguards against algorithmic censorship and manipulation, emphasizing transparency and appeal rights for moderation and curation decisions.
Brazil's Bill 2338 has evolved through amendments. A notable strength: clear definitions of what is and is not covered, including "AI system," "general-purpose AI," "generative AI," "text and data mining," and "information integrity." Specificity matters: many of these terms lack consensus definitions and drift over time unless the drafting pins them down.
Drafting guidance for policymakers and in-house counsel
- Define scope precisely: Anchor terms (AI system, GPAI, recommender, moderation, high-risk) and carve-outs. Avoid catch-alls that chill legitimate newsgathering.
- Protect press freedom: Avoid state definitions of who is a "journalist." Focus on activities (newsgathering, editing, publishing) and public-interest functions.
- Manipulated content with safeguards: Prohibit deceptive uses while allowing limited, documented journalistic uses (e.g., source protection) with editorial oversight and audit trails.
- Due process for automated decisions: Require notice, clear reasons, human review, and timely appeals for algorithmic curation and moderation that affect reach, revenue, or access.
- Transparency with privilege protection: Calibrate disclosures so audits do not expose sources, unpublished materials, or newsroom methodologies protected by law.
- Risk-based audits: Independent, proportionate, and focused on discrimination, misinformation risks, and safety, without forcing disclosure of sensitive editorial data.
- Pluralism requirements for recommenders: Encourage access to local, community, and independent media, consistent with competition law and freedom of expression.
- Differentiate duties by role: Developers, deployers, platforms, and publishers face different responsibilities and liabilities.
- Criminal law as last resort: Reserve criminal penalties for serious harms (e.g., child sexual abuse material). Prefer civil and administrative tools for most violations.
- Align with existing regimes: Data protection, copyright, consumer protection, and anti-discrimination should interlock, not conflict.
- Clarity on TDM and IP: Set rules for text and data mining, licensing, and attribution that support both innovation and press sustainability.
Compliance checklist for media organizations using AI
- Governance: Assign product, legal, and editorial owners for AI use; maintain a system inventory (see the sketch after this checklist); set approval thresholds for new tools.
- Policies: Draft internal standards for synthetic media, corrections, disclosure, and human-in-the-loop review for high-risk outputs.
- Data hygiene: Apply data minimization, consent checks, and retention limits; segregate sensitive source material.
- Bias and safety testing: Run pre-deployment and periodic tests for discrimination, hallucinations, and misinformation risks; log results.
- Vendor management: Update DPAs and SLAs for model updates, incident notice, security, and audit cooperation; verify provenance tools.
- Appeals and records: Track platform moderation events affecting content; document challenges and outcomes.
- Training: Educate staff on acceptable use, disclosure rules, and red flags for synthetic media.
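The "system inventory" and "log results" items above lend themselves to simple, structured records. Below is a minimal sketch of one way a newsroom might keep those records; the field names, risk tiers, and example values are illustrative assumptions, not requirements drawn from any law or from the CNTI analysis.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: field names and risk tiers are assumptions for this sketch,
# not terms taken from any statute or regulator's guidance.

@dataclass
class AISystemRecord:
    """One entry in a newsroom's AI system inventory."""
    name: str                       # internal tool or vendor product name
    vendor: str                     # "in-house" for internally built systems
    purpose: str                    # transcription, recommendation, moderation, ...
    risk_tier: str                  # e.g. "low", "elevated", "high" per internal policy
    owners: dict = field(default_factory=dict)  # product / legal / editorial contacts
    handles_source_material: bool = False       # triggers stricter data segregation
    human_review_required: bool = True          # human-in-the-loop for high-risk outputs

@dataclass
class TestLogEntry:
    """A logged bias or safety test run, kept for audits and appeals."""
    system_name: str
    test_date: date
    test_type: str                  # "bias", "hallucination", "misinformation", ...
    result_summary: str
    follow_up_required: bool

# Usage: a one-item inventory plus one logged test run (hypothetical values).
inventory = [
    AISystemRecord(
        name="auto-transcriber",
        vendor="in-house",
        purpose="interview transcription",
        risk_tier="low",
        owners={"product": "A. Editor", "legal": "B. Counsel"},
        handles_source_material=True,
    )
]
test_log = [
    TestLogEntry("auto-transcriber", date(2025, 1, 15), "hallucination",
                 "spot-checked 50 transcripts; 2 flagged for editorial review", False)
]
```

Even a lightweight record like this gives counsel something concrete to point to when auditors, regulators, or platforms ask how AI use is governed.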
Who needs a seat at the table
Legislative working groups should include publishers, editors, product teams, engineers, AI researchers, civil society, and other relevant stakeholders. This mix reduces blind spots and keeps drafts current with technical realities.
Open issues to monitor
- How "information integrity" is defined and enforced without policing lawful speech.
- Standards for watermarking, provenance, and authenticity, and their limits at scale.
- Allocation of liability across model providers, platforms, and publishers.
- Cross-border conflicts between AI rules, press laws, and data protection regimes.
Why this matters now
Most countries in the region have at least one initiative touching AI and journalism. The window is open to codify safeguards (clear definitions, due process, pluralism, and transparency) without creating new risks to editorial independence.
Further reading
For the research cited here, see the Center for News, Technology and Innovation's work on journalism and AI: newsinnovation.org.
If your legal team needs a quick way to get up to speed on AI concepts by role, explore these resources: AI courses by job.