Cracking the Legal Code for AI: Key Takeaways for Texas Lawyers
On the September 4 episode of the State Bar of Texas Podcast, attorney Shawn Tuma joined host Rocky Dhir to unpack how Texas is approaching artificial intelligence. The discussion centered on new technology laws that emphasize preventing harm and encouraging ethical use, along with the practical duty of keeping pace with technology.
"Our legislature took a very wise approach, in my opinion, with how they approached the technology legislation this year in that there weren't a lot of fireworks. Most of what we saw was fairly bipartisan at least in purpose and overall design," Shawn said. "They have found a balance for right now with enacting laws that tend to be the things that most people agree on."
What Texas just did
Texas enacted technology-focused measures that set guardrails without overreach. The priorities: reduce risk to consumers and businesses, encourage ethical deployment, and preserve space for responsible innovation.
For legal teams, this means clearer expectations around data use, transparency, and accountability. It also signals growing scrutiny over how AI tools are acquired, implemented, and monitored inside organizations.
Why it matters for legal teams
- Competence now includes technology. Clients expect informed guidance on AI risks, controls, and compliance. Many courts and bar groups are reinforcing this expectation.
- Governance is no longer optional. Policies, human oversight, and documentation will be decisive in regulatory inquiries and litigation.
- Vendors and models carry legal risk. Contracts should address data ownership, output reliability, indemnities, security, and audit rights.
- Privacy and security obligations still apply. AI does not excuse lapses under data privacy statutes or breach notification laws.
- Incident response must account for AI. Model misuse, data leakage, and automated decision errors need defined playbooks.
Practical steps to act now
- Adopt an AI use policy that covers approved tools, prohibited uses, human review, and documentation standards.
- Stand up a lightweight AI governance process: intake, risk assessment, testing, and periodic review for every material AI use case.
- Update vendor due diligence and contracts for AI-specific risks (training data, hallucinations, bias, security, IP, and auditability).
- Map data flows feeding AI tools; restrict sensitive data and apply minimization, retention limits, and access controls.
- Train lawyers and staff on safe use, confidentiality, and verification of AI outputs; then test and refresh that training.
- Extend your incident response plan to cover AI misuse, model errors, and third-party failures; run a tabletop.
About Shawn Tuma
Shawn co-leads the firm's Cyber | Data | Artificial Intelligence | Emerging Technology Practice Group and serves as Office Managing Partner for the Plano, Texas location. He maintains an active practice advising clients on cybersecurity, data privacy, data breach and incident response, regulatory compliance, computer fraud-related issues, and cyber-focused litigation.
Listen to the conversation
Hear the full discussion on the State Bar of Texas Podcast: Cracking the Legal Code for AI.
Helpful resources
- Texas Data Privacy and Security Act overview: Office of the Texas Attorney General
- Competence and technology guidance: ABA Model Rule 1.1
Build practical AI capability
If your team is formalizing AI training to meet client and compliance expectations, explore role-based options here: Complete AI Training - Courses by Job.