Harris Beach Murtha Forms AI Industry Team to Guide Clients Through AI Risk, Deals, and Compliance
Harris Beach Murtha, which has an office in Uniondale, has launched a dedicated artificial intelligence industry team to help clients manage legal, ethical, regulatory, and business issues tied to AI adoption.
The group is led by attorneys Brendan Palfreyman, Timothy Plunkett, and Alan Winchester, and includes more than a dozen lawyers who advise, litigate, teach, speak, and publish on AI topics.
Why this matters for legal teams
Workplace use of AI is rising. Recent polling shows a significant share of U.S. adults now use AI for job tasks and idea generation, while many organizations remain in testing or pilot mode. That gap, high usage paired with early-stage governance, puts more pressure on in-house counsel to set policy, update contracts, and build controls before risks harden into problems.
Who's leading, and what they bring
- Brendan Palfreyman: Negotiates SaaS deals with AI vendors, drafts and implements internal AI policies, and advises AI companies on regulatory questions. Holds a Generative AI for the Legal Profession certification from UC Berkeley School of Law.
- Timothy Plunkett: An AI Governance Professional who guides businesses and public entities on regulation, compliance, governance, and risk mitigation.
- Alan Winchester: Leads the firm's Cybersecurity Protection and Response Practice Group, with a focus on AI-related security and privacy.
What clients can expect
- AI and SaaS contract review and negotiation, including data use, IP, indemnities, audit rights, and security.
- AI governance frameworks, policy development, and implementation across business units.
- Regulatory analysis and compliance planning across federal, state, and sector-specific rules.
- Privacy, security, and incident response counsel tied to AI systems and data flows.
- Litigation, investigations, and disputes stemming from AI outputs or use.
- Training and briefings for boards, executives, and product teams.
What the firm says
"We've provided AI advice to clients for some time," Palfreyman said. "Forming this industry team creates coordinated counsel that gives our clients an edge."
Practical next steps for GCs and compliance leaders
- Inventory use: List current and planned AI use cases. Note systems, data sources, and business owners.
- Set guardrails: Stand up an AI policy that covers acceptable use, human oversight, disclosure, and records.
- Update procurement: Add AI-specific due diligence and terms to vendor reviews (training data origin, data rights, model updates, security, audit, and offboarding).
- Protect IP and data: Address confidentiality, output ownership, third-party IP risks, and data retention. Build review steps for sensitive uses.
- Bias and safety checks: Define quality metrics, bias testing, red-team procedures, and escalation paths.
- Map compliance: Track applicable privacy laws, sector guidance, and AI-focused rules. Document decisions.
- Train your people: Provide role-specific training for legal, product, security, and operations.
The bottom line
AI is moving fast inside organizations, while formal programs lag behind. Harris Beach Murtha's new team signals growing demand for counsel that blends contracts, privacy, cybersecurity, risk, and practical governance, so legal leaders can move initiatives forward without inviting avoidable exposure.
Want structured learning for your legal or compliance team? Explore curated AI programs by role at Complete AI Training.