EU AI Act Countdown: Compliance, Liability, and Safe AI Use in Business

AI can boost IT and security but brings legal duties under the EU AI Act. A new white paper guides in-house counsel on risk mapping, GDPR, governance, and contracts.

Published on: Oct 07, 2025

Legal pitfalls when using AI: What in-house counsel need to know now

Artificial intelligence is reshaping IT operations and risk exposure. It can raise productivity, create new service lines, and strengthen security, but it also introduces legal duties that can't be ignored. A new white paper, "AI in IT: A Legal Minefield," clarifies the most relevant requirements under European and German law and shows where businesses are most exposed.

The message is simple: if your company deploys AI, legal and cybersecurity controls must move in lockstep. The sooner you set clear rules, the fewer surprises you'll face later.

EU AI Act: what's coming and when

The European Union's AI Regulation (EU AI Act) sets horizontal rules for providers and deployers of AI systems. Core obligations for high-risk systems will apply from August 2026; prohibitions on certain AI practices have applied since February 2025, and duties for general-purpose AI models since August 2025. Legal teams should map AI use cases now, not in two years.

  • Determine whether each system falls into the prohibited, high-risk, limited-risk, or minimal-risk category (a register sketch follows this list).
  • For high-risk AI, prepare for risk management, data governance, documentation, transparency, and human oversight requirements.
  • Expect stricter duties for providers and clearly assigned responsibilities for deployers within the enterprise.
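
Classification decisions are easier to evidence later if they live in a structured register from day one. Below is a minimal, illustrative Python sketch of such a register; the field names, the RiskCategory values, and the needs_attention checks are our assumptions for illustration, not terms or tests defined by the Regulation, and real classification requires case-by-case legal analysis.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"   # banned practices (e.g., social scoring)
    HIGH = "high"               # e.g., Annex III use cases such as hiring
    LIMITED = "limited"         # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"         # no specific AI Act duties

class ActorRole(Enum):
    PROVIDER = "provider"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    DEPLOYER = "deployer"

@dataclass
class AIUseCase:
    name: str
    owner: str                   # accountable business owner
    role: ActorRole              # your company's role for this system
    risk: RiskCategory
    processes_personal_data: bool
    dpia_done: bool = False

def needs_attention(uc: AIUseCase) -> list[str]:
    """Return open compliance gaps for one registered use case."""
    gaps = []
    if uc.risk is RiskCategory.PROHIBITED:
        gaps.append("prohibited practice: stop or redesign")
    if uc.risk is RiskCategory.HIGH:
        gaps.append("prepare risk management, documentation, oversight")
    if uc.processes_personal_data and not uc.dpia_done:
        gaps.append("run a DPIA and document the lawful basis")
    return gaps

register = [
    AIUseCase("support chatbot", "CX lead", ActorRole.DEPLOYER,
              RiskCategory.LIMITED, processes_personal_data=True),
    AIUseCase("CV screening tool", "HR lead", ActorRole.DEPLOYER,
              RiskCategory.HIGH, processes_personal_data=True),
]

for uc in register:
    for gap in needs_attention(uc):
        print(f"{uc.name}: {gap}")
```

Even a sketch this small makes ownership, roles, and open gaps visible at a glance; the real work is keeping the register current as use cases change.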

Reference texts and timelines are available from the European Commission and the official publication of the Regulation in the EU's Official Journal. The Commission's overview of the AI Act provides policy context and updates.

Key legal questions you must answer

  • Compliance duties: What obligations apply to your role (provider, importer, distributor, or deployer)? How will you evidence compliance to regulators and customers?
  • Personal data processing: Are you meeting GDPR principles (lawful basis, purpose limitation, data minimization, and transparency)? If models learn from personal data, how do you control retention and deletion? The core principles are set out in Article 5 GDPR.
  • Content ownership: Who owns outputs from generative systems used by staff? Do your contracts and policies assign rights and manage third-party IP risk (training data and outputs)?
  • Liability: How do you allocate responsibility if AI causes harm or errors? Do your agreements, insurance, and vendor controls cover foreseeable AI-specific losses?
  • Employee use: Which tools are approved, for what purposes, and under what data-sharing limits? How do you restrict uploads of confidential or personal data?

Practical moves for legal teams

  • Inventory and classify all AI use cases, models, and third-party tools. Map each to its risk category under the AI Act.
  • Establish governance: policies for procurement, acceptable use, DPIAs, model change control, and incident response.
  • Tighten contracts: insert AI-specific clauses on training data, IP warranties, audit rights, security controls, and liability caps.
  • Data guardrails: define no-go data for prompts, set redaction rules, and use gateways that log and filter requests (a minimal gateway sketch follows this list).
  • Transparency and records: keep technical documentation, risk logs, and user instructions required by the AI Act.
  • Human oversight: require review checkpoints for high-impact decisions and document decision authority.
  • Security-by-default: integrate cybersecurity standards with legal compliance to reduce model abuse, data leakage, and integrity risks.
  • Training: upskill legal, IT, and business owners on duties by role, not just generic AI concepts.
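
To make the data guardrails point concrete, here is a minimal, illustrative Python sketch of a prompt gateway that logs each request and redacts obvious personal data before it reaches an external model. The regex patterns, the blocklist, and the forward_to_model stub are assumptions for illustration; a production gateway would need far more robust detection and an approved model client.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Illustrative patterns only: real PII detection needs dedicated tooling.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IBAN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")
BLOCKED_MARKERS = ("confidential", "internal only")

def redact(prompt: str) -> str:
    """Replace recognizable personal data with placeholders."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    return IBAN.sub("[IBAN]", prompt)

def gateway(user: str, prompt: str) -> str:
    """Log, filter, and redact a prompt before forwarding it."""
    if any(marker in prompt.lower() for marker in BLOCKED_MARKERS):
        log.warning("blocked prompt from %s (confidential marker)", user)
        raise PermissionError("prompt contains blocked content")
    clean = redact(prompt)
    log.info("forwarding prompt from %s: %s", user, clean)
    return forward_to_model(clean)  # hypothetical model client

def forward_to_model(prompt: str) -> str:
    # Stub standing in for a call to an approved model endpoint.
    return f"model response to: {prompt}"

print(gateway("j.doe", "Summarize the ticket from jane@example.com"))
```

The design choice worth copying is the single choke point: because every prompt passes through one gateway, the logs double as the audit trail the transparency and record-keeping duties call for.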

Why cybersecurity and legal certainty are linked

Weak security turns into legal exposure: data breaches, integrity issues, and untraceable model changes can undermine compliance. The white paper shows why security controls (access, encryption, monitoring, model governance) are essential to meet transparency, safety, and accountability requirements.

What the new white paper delivers

"AI in IT: A Legal Minefield" offers a concise guide for management and IT leaders. It explains the most important legal issues in practical terms, focused on current developments in EU and German law, and shows how to keep AI use safe and controlled across the organization.

Quote

"Current developments in the field of artificial intelligence have created a veritable gold rush in the industry. Despite all the euphoria, it is important to always use this technology in a compliant and secure manner. With our new white paper, we want to support companies in this process. It is intended to provide guidance and, at the same time, make it clear that cybersecurity must be part of every discussion about AI from the very beginning," explains Richard Werner, Cybersecurity Platform Lead Europe at Trend Micro.

Next steps

  • Assign owners for each AI use case; start a simple register and risk classification now.
  • Run DPIAs where personal data is involved; document lawful basis and data flows end-to-end.
  • Align procurement with AI-specific contract language and continuous vendor assurance.
  • Set a timeline to meet AI Act obligations, prioritizing high-impact and high-risk systems.

For teams building internal capability, consider structured AI training tailored to each role.

About Trend Micro

Trend Micro is one of the world's leading IT security providers. With more than 30 years of security expertise, global threat research, and ongoing innovation, Trend Micro protects businesses, public sector organizations, and consumers. Its XGen security strategy combines multiple defense techniques for modern environments, delivering connected visibility across cloud workloads, endpoints, email, IIoT, and networks for faster detection and response.