NetApp's Beth O'Callahan: Three AI Risks Lawyers Can't Ignore, and the Legal Team's Role

Beth O'Callahan says AI is now routine in legal work, and risk climbs without guardrails. Her playbook: clear policy, human review, data controls, and a practical, measured rollout.

Categorized in: AI News, Legal
Published on: Jan 29, 2026

Elizabeth "Beth" O'Callahan didn't pick law for prestige. She picked it to solve hard problems and help people move forward. That impulse drives her work today as executive vice president, chief administrative officer, and corporate secretary at NetApp-where she oversees legal, compliance, government affairs, HR, communications, and sustainability.

Her view is clear: AI is now part of daily legal work, and legal teams can either help their companies use it responsibly or watch risk pile up. Here's how she thinks about it, and what in-house counsel should do next.

Why law, and why AI

O'Callahan was drawn to law because it rewards clear thinking and service. In-house practice gave her a way to pair problem-solving with execution.

Working in Silicon Valley and at NetApp made AI impossible to ignore. Since AI runs on data, she sees a direct line between data strategy and business outcomes. That's where legal can be a force multiplier.

How AI shows up at NetApp

NetApp's mission is to help customers get value from their data. The company brings AI to where the data lives and focuses on responsible use across the business.

O'Callahan's legal team partners with engineering, IT, and security to govern GenAI tools and data use. The work includes policy, training, and guardrails, so both NetApp and its customers use AI responsibly.

The top three AI risks for legal teams

  • Accuracy and reliability. AI can hallucinate, cite shaky sources, or misread low-quality inputs. Human review is non-negotiable. Require source transparency, document the basis for outputs, and set usage boundaries.
  • Data protection and confidentiality. Privacy, trade secrets, and privileged information are easy to expose through prompts or logs. Data classification, access controls, and approved workflows are table stakes.
  • Innovation vs. risk aversion. Fear can stall progress. Blind adoption creates liability. Legal's job is to make "responsible use" practical through training, guardrails, and clear tool selection criteria.

What in-house counsel can implement now

  • Policy first: Define approved use cases, prohibited inputs (PII, PHI, privileged data), review steps, and retention rules. (A minimal gating sketch follows this list.)
  • Human in the loop: Require expert review for any client-facing, public, or regulatory output. No unsupervised drafting for sensitive matters.
  • Source standards: Demand citations, versioning, and reproducibility for AI-assisted research or analysis.
  • Vendor diligence: Evaluate model provenance, training data, IP indemnities, security posture, logging, and opt-out controls.
  • Data controls: Classify data, restrict uploads, and use enterprise instances with admin oversight and audit trails.
  • IP safeguards: Track ownership of AI-assisted work product, licensing of inputs, and model terms that affect rights.
  • Training & enablement: Teach prompt discipline, privacy-by-default, and red-teaming basics. Publish examples of "good" vs. "bad" use.
  • Governance loop: Set up a review board with legal, security, and engineering. Pilot, measure, refine, expand.
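
To make "policy first" and "data controls" concrete, here is a minimal Python sketch of a pre-submission gate that checks a tool against an approved list and screens a prompt for prohibited inputs. Everything here, the tool names, the regex patterns, and the `gate_prompt` helper, is an illustrative assumption, not NetApp's actual tooling; a real deployment would lean on enterprise DLP and data-classification systems rather than hand-rolled patterns.

```python
import re

# Illustrative only: in practice these come from your written AI policy,
# not from constants hard-coded next to the check.
APPROVED_TOOLS = {"enterprise-copilot", "contract-summarizer"}  # assumed names

# Crude stand-ins for a real DLP/classification step (assumed patterns).
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "privilege_marker": re.compile(r"attorney[- ]client privilege", re.IGNORECASE),
}

def gate_prompt(tool: str, prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Blocks unapproved tools and flagged inputs."""
    reasons = []
    if tool not in APPROVED_TOOLS:
        reasons.append(f"tool '{tool}' is not on the approved list")
    for name, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            reasons.append(f"prompt matches blocked pattern: {name}")
    return (not reasons, reasons)

allowed, reasons = gate_prompt("enterprise-copilot", "Summarize clause 4.2 of the NDA.")
print(allowed, reasons)  # True, []
```

The point of the sketch is the shape of the control, deny by default with explicit reasons, so refusals can feed back into training and policy updates.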

How legal reduces AI paralysis

Most teams stall because they fear unknowns. Replace fear with process. Start small, define metrics, and scale what works.

  • Pick two high-value use cases, such as internal clause suggestions and research summaries.
  • Measure: time saved, error rates after review, and incidents prevented (see the sketch after this list).
  • Close the loop: capture what failed (bad prompts, wrong data) and turn it into training materials.
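
As a sketch of the measurement step, here is a small, assumed `PilotMetrics` helper for tallying time saved and errors caught in review; the field and method names are placeholders, and your own pilot would define its own metrics.

```python
from dataclasses import dataclass

@dataclass
class PilotMetrics:
    """Illustrative tally for an AI pilot; all fields are assumptions."""
    minutes_saved: float = 0.0
    outputs_reviewed: int = 0
    errors_caught_in_review: int = 0

    def record_review(self, minutes_saved: float, had_error: bool) -> None:
        self.outputs_reviewed += 1
        self.minutes_saved += minutes_saved
        if had_error:
            self.errors_caught_in_review += 1

    @property
    def error_rate(self) -> float:
        # Share of reviewed outputs where human review caught a problem.
        return self.errors_caught_in_review / self.outputs_reviewed if self.outputs_reviewed else 0.0

metrics = PilotMetrics()
metrics.record_review(minutes_saved=20, had_error=False)
metrics.record_review(minutes_saved=35, had_error=True)
print(f"{metrics.error_rate:.0%} error rate, {metrics.minutes_saved:.0f} minutes saved")
```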

Practical standards worth knowing

You don't need to reinvent the wheel. Established guidance, such as the NIST AI Risk Management Framework and ISO/IEC 42001, can frame your policies and audits.

A quick checklist for counsel

  • Do we have a written AI policy and an approved tool list?
  • Are sensitive inputs blocked or stripped before use?
  • Is every AI-assisted output reviewed by a qualified human?
  • Do we log prompts, outputs, and approvals for auditability? (See the logging sketch after this checklist.)
  • Have we mapped IP ownership and vendor indemnities?
  • Are employees trained with real examples from our business?
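
On the logging question, here is a minimal sketch of a JSON-lines audit record, assuming you hash prompt and output text so the log itself can't leak sensitive content; the field names and the `audit_record` helper are illustrative, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, tool: str, prompt: str, output: str, approver: str | None) -> str:
    """Build one JSON-lines audit entry with hashed content."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        # Store hashes, not raw text, so the audit log doesn't become
        # another place where confidential material can be exposed.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "approved_by": approver,  # None = not yet reviewed
    }
    return json.dumps(entry)

print(audit_record("jdoe", "enterprise-copilot", "Draft clause...", "Here is...", "senior-counsel"))
```

Hashing is one design choice among several; some teams keep encrypted full text for e-discovery instead. The essential property is that every prompt, output, and approval leaves a verifiable trail.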

How O'Callahan frames the opportunity

Standing still is a risk by itself. The goal isn't to say "yes" to everything or "no" to everything; it's to make responsible AI usable.

Legal teams are uniquely positioned to do that work. Bring clarity, set guardrails, and help the business move with confidence.

Level up your team

If your legal function is building AI literacy, start with focused, role-based learning. Short, practical modules beat long theoretical decks.

