Holding Big Tech to Account Through a Catholic Lens on AI Ethics

AI is changing products, but counsel must center people, not just liability. The call: codify duty of care, confront exploitation and child safety, and plan for sustainability.

Categorized in: AI News, Legal
Published on: Nov 24, 2025

Big Tech Ethics in the Era of AI: What Legal Teams Should Do Now

AI is changing how products are built, sold, and used. The harder question for counsel: what do the companies behind those systems owe people - not just legally, but ethically?

At a one-day conference hosted by The Catholic University of America Columbus School of Law, speakers pressed that point from multiple angles: human dignity, exploitation risks, soft-law frameworks turning into enforceable rules, and sustainability pressures that are moving from PR to material risk.

Keynote: Start with the human person, then write the rules

Taylor Black, director of AI & Venture Ecosystems at Microsoft and founding director of Catholic University's Institute for AI & Emerging Technologies, set the tone: Catholic social teaching begins with the person, not the system. The core prompt for counsel isn't "What can this tech do?" but "What does it do to people - who is helped, harmed, or left behind?"

Black called for three practical moves: shared frameworks for responsible AI across companies, cross-sector accountability structures, and investment in formation - meaning ethical and professional development, not only technical training. For legal teams, that means codifying principles into policy, contracts, and audit routines that survive executive turnover and product pivots.

Panel: Big Tech as a facilitator of exploitation

Danielle Bianculli Pinter argued that corporate responsibility teams often care but get sidelined, while the sector spends heavily to avoid regulation and benefits from broad immunity. Her point to counsel: liability changes behavior, and the current incentives are misaligned.

Annick Febrey highlighted forced labor schemes that use tech to recruit people into coercive situations - an estimated 28 million people are in forced labor worldwide. John Cotton Richmond added the blunt reality: bad actors will keep using these tools to commoditize people; policy and enforcement must assume active adversaries, not edge cases.

Legal implications: elevate online exploitation from a "trust and safety" issue to an enterprise risk. Revisit assumptions about immunity, build duty-of-care arguments into design reviews, and write vendor and platform clauses that mandate detection, reporting, and cooperation with law enforcement.

Panel: Corporate responsibility and ethics in the AI era

Maryann Cusimano Love noted that the Church has been engaging with industry and users for years, including the Rome Call for AI Ethics - principles like transparency, inclusion, responsibility, impartiality, reliability, and security/privacy that often start as soft law and later become enforceable standards. See the Rome Call overview from the Pontifical Academy for Life: Rome Call for AI Ethics.

Paul Lekas observed that most agree on the principles; the gap is operationalizing them. Adam Eisgrau underscored the live fight over fair use and training generative models on copyrighted works - the legal and ethical balance remains unsettled. Charles Duan emphasized the translation task: turn principles into machine-understandable guardrails.

For counsel, that means mapping principles to artifacts: model cards, dataset provenance, DPIAs (data protection impact assessments), safety test thresholds, human-in-the-loop criteria, incident escalation, and retention policies. Treat "soft" commitments as future obligations and draft with that in mind.
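
To make that concrete, here is a minimal sketch of the principle-to-artifact mapping as a pre-sign-off check, written in Python. It assumes a release is tracked as a plain record of artifact paths; the principle names loosely follow the Rome Call, and identifiers such as REQUIRED_ARTIFACTS and missing_artifacts are illustrative, not standard tooling.

```python
# Minimal sketch: map "soft" AI principles to concrete release artifacts
# and flag gaps before sign-off. All names and paths here are illustrative.

REQUIRED_ARTIFACTS = {
    "transparency": ["model_card", "training_data_provenance"],
    "responsibility": ["human_in_the_loop_criteria", "incident_escalation_plan"],
    "impartiality": ["bias_test_report"],
    "reliability": ["safety_test_results"],
    "security_privacy": ["dpia", "retention_policy"],
}

def missing_artifacts(release: dict) -> dict[str, list[str]]:
    """Return principles whose required artifacts are absent from a release record."""
    gaps = {}
    for principle, artifacts in REQUIRED_ARTIFACTS.items():
        absent = [a for a in artifacts if not release.get(a)]
        if absent:
            gaps[principle] = absent
    return gaps

# Example: a release record missing its bias test report and DPIA
release = {
    "model_card": "models/summarizer/v3/card.md",
    "training_data_provenance": "datasets/provenance.csv",
    "human_in_the_loop_criteria": "policies/hitl.md",
    "incident_escalation_plan": "runbooks/ai-incidents.md",
    "safety_test_results": "qa/safety-2025-11.json",
    "retention_policy": "policies/retention.md",
}

if gaps := missing_artifacts(release):
    print("Block sign-off; missing:", gaps)  # flags impartiality and security_privacy
else:
    print("All mapped artifacts present.")
```

The point of the exercise is that a "soft" principle with no artifact behind it is unenforceable; once each one maps to a named document or test, the same table drives contracts, audits, and regulator responses.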

Lived harms: sextortion and product accountability

Rep. Brandon Guffey shared how his son was targeted by an Instagram scammer and became a victim of sexual extortion; he later died by suicide. Guffey sued Instagram and testified before the U.S. Senate Judiciary Committee, and now advocates against online crimes.

The takeaway for legal teams: child safety, extortion, and design choices around reporting, default privacy, and rapid response are litigation and legislative risk, not just policy pages. Document your child-safety posture like a safety-critical program, with KPIs and board visibility.

Panel: Sustainability and risk management - follow the business, not the politics

David Curran argued that business dynamics, not regulation, drive sustainability; the law lags behind. Erica Lasdon noted that ESG factors have become operational and material. Kevin Tubbs pushed for a reframing: focus on serving the need efficiently, not on partisan cues.

Brian Downing flagged a blind spot: data centers. Their footprint is growing, and scrutiny will increase. Counsel should prepare for disclosure expectations, siting controversies, water and energy constraints, and claims tied to AI's demand on infrastructure.

What in-house counsel can do in the next 90 days

  • Codify an AI duty of care: define unacceptable use, human oversight thresholds, red-teaming gates, and incident playbooks. Tie them to sign-offs and audit trails (a minimal sketch of such a gate follows this list).
  • Stress-test liability assumptions: review immunity positions, product liability analogs, and negligence exposure for foreseeable misuse (e.g., sextortion, trafficking facilitation).
  • Upgrade data and content provenance: require supplier attestations, dataset licensing checks, and model training documentation. Bake these into contracts and DPAs.
  • Operationalize "soft" AI principles: align them with measurable controls (bias testing cadence, explainability thresholds, access controls, privacy-by-default settings).
  • Child-safety posture: enable swift reporting, age-appropriate defaults, evidence preservation, and law enforcement cooperation clauses with clear SLAs.
  • Supply chain due diligence: add forced labor screening, recruitment-fee bans, grievance mechanisms, and audit rights for high-risk geographies and intermediaries.
  • Data center and sustainability risk: assess siting, water and energy contracts, emissions disclosures, and community impact; prepare disclosure language and board updates.
  • Governance and board oversight: define AI and exploitation risk owners, quarterly dashboards, and escalation paths that survive leadership changes.
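
As a starting point for the first item, here is a minimal sketch of a duty-of-care release gate that blocks sign-off until each control is satisfied and appends every decision to an audit log. The field names, the JSONL log file, and the DutyOfCareGate class are assumptions for illustration; a real gate would hang off your existing ticketing or MLOps workflow.

```python
# Minimal sketch of a duty-of-care release gate with an append-only audit
# trail. Field names and the JSONL log are illustrative, not a standard.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DutyOfCareGate:
    use_case: str
    prohibited_use_reviewed: bool   # checked against the unacceptable-use list
    human_oversight_defined: bool   # oversight thresholds documented
    red_team_passed: bool           # red-teaming gate cleared
    incident_playbook_linked: bool  # playbook exists and is current

    def approved(self) -> bool:
        return all([self.prohibited_use_reviewed, self.human_oversight_defined,
                    self.red_team_passed, self.incident_playbook_linked])

def record_signoff(gate: DutyOfCareGate, approver: str,
                   path: str = "ai_audit_log.jsonl") -> bool:
    """Append a timestamped sign-off decision so the trail survives turnover."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "approver": approver,
        "decision": "approved" if gate.approved() else "blocked",
        **asdict(gate),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return gate.approved()

# Example: a launch blocked because red-teaming has not cleared
gate = DutyOfCareGate("support-chatbot", True, True, False, True)
print(record_signoff(gate, approver="deputy-gc@example.com"))  # False
```

Because every decision is written to the same append-only log, the audit trail outlives any one approver, which is exactly the turnover-proofing the keynote called for.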

Resources

Upskilling for legal teams

If you're formalizing AI literacy across legal, privacy, and compliance roles, browse practical programs by job role: Complete AI Training - Courses by Job.

The conference was a collaboration among Catholic Law's Corporate Responsibility and Compliance Program, Law and Technology Institute, and the Bakhita Initiative for the Study and Disruption of Modern Slavery.

