Kazakhstan's First AI Law: What Legal Teams Need to Know
Kazakhstan's new artificial intelligence law is now in force. It classifies AI systems by risk, sets cybersecurity baselines for high-impact deployments, bans certain uses outright, and clarifies how copyright applies to AI outputs and prompts.
For counsel, this is a compliance build-out, not a press release. Below is a concise breakdown and a checklist you can put to work immediately.
Risk-Based Classification
The law ranks AI systems by level of risk. Systems used by government bodies and in critical sectors are treated as state systems for cybersecurity purposes.
- Expect state-system security controls (e.g., hardened configurations, access control, logging, and testing) to apply to your high-risk AI deployments.
- Map affected systems now: who operates them, where data flows, and which vendors touch them (a minimal inventory sketch follows this list).
- Align internal policies and vendor contracts to state-level security requirements to avoid gaps.
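To make the mapping exercise concrete, here is a minimal sketch of an inventory record in Python. The field names and risk tiers are illustrative assumptions, not terms drawn from the statute; adapt them once the official classification criteria are published.

```python
from dataclasses import dataclass

# Illustrative risk tiers -- the statute's own categories may differ.
RISK_TIERS = ("minimal", "limited", "high")

@dataclass
class AISystemRecord:
    """One row in an internal AI system inventory (hypothetical schema)."""
    name: str
    business_owner: str           # accountable internal owner
    operator: str                 # team or vendor that runs the system
    vendors: list[str]            # third parties that touch the system or its data
    data_flows: list[str]         # e.g. "CRM -> model API -> analytics"
    government_or_critical: bool  # triggers state-system treatment under the law
    risk_tier: str = "minimal"

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")
        # Systems in government or critical sectors get the highest tier by default.
        if self.government_or_critical:
            self.risk_tier = "high"
```

Even a spreadsheet will do at first; what matters is that every system has a named owner, a vendor list, and a risk tier someone can defend in an audit.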
Prohibited Uses
- Manipulation of users.
- Discrimination.
- Emotion recognition without explicit consent.
- Exploitation of people's vulnerabilities.
- Creation of prohibited content.
Review product features, marketing workflows, and data science practices for any functionality that might fall into these categories. Disable or gate features that profile emotions; where consent makes them permissible, implement clear, revocable consent flows and logging.
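To illustrate what clear, revocable consent with logging can look like in practice, here is a minimal Python sketch. The event schema and file-based storage are assumptions for illustration; the actual consent standard (format, scope, retention) is one of the open questions flagged at the end of this piece.

```python
import json
from datetime import datetime, timezone

CONSENT_LOG = "consent_events.jsonl"  # append-only; real systems need durable storage

def record_consent_event(user_id: str, purpose: str, granted: bool) -> None:
    """Append a timestamped consent grant or revocation (illustrative schema)."""
    event = {
        "user_id": user_id,
        "purpose": purpose,  # e.g. "emotion_recognition"
        "granted": granted,  # False records a revocation
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with open(CONSENT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def has_active_consent(user_id: str, purpose: str) -> bool:
    """Latest event wins, so a revocation takes effect immediately."""
    latest = None
    try:
        with open(CONSENT_LOG, encoding="utf-8") as f:
            for line in f:
                event = json.loads(line)
                if event["user_id"] == user_id and event["purpose"] == purpose:
                    latest = event
    except FileNotFoundError:
        return False
    return bool(latest and latest["granted"])

# Gate the feature on a live check, never a cached flag:
# if has_active_consent(user_id, "emotion_recognition"): run_emotion_features(...)
```

Because every grant and revocation is timestamped, the same log doubles as evidence for the retention and audit questions raised below.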
Labeling Duties
Content, goods, and services created using AI must be labeled as such. Treat this as a disclosure obligation across customer-facing and B2B touchpoints.
- Add visible AI-origin labels in product UIs, outputs, packaging, and marketing assets (see the sketch after this list).
- Mirror disclosures in terms of service, user guides, and API documentation.
- Ensure downstream partners preserve labels when redistributing outputs.
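One way to make label preservation the default rather than an afterthought is to attach the disclosure to each artifact at generation time, so every downstream channel inherits it. This is a sketch under assumptions: the statute's required label wording and placement are not yet settled, and "Created using AI" is a placeholder, not statutory text.

```python
from datetime import datetime, timezone

def wrap_generated_output(content: str, model: str) -> dict:
    """Attach AI-origin metadata to a generated artifact (illustrative envelope).

    "Created using AI" is a placeholder, not statutory wording; swap in the
    required label text once the regulator publishes it.
    """
    return {
        "content": content,
        "ai_generated": True,         # machine-readable flag for partners and APIs
        "label": "Created using AI",  # human-readable disclosure
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

def render_for_user(artifact: dict) -> str:
    """UI layer keeps the visible label attached to the content it describes."""
    return f"{artifact['content']}\n\n[{artifact['label']}]"
```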
Copyright and Prompts
Copyright is recognized only where there is a human creative contribution, and the law also extends protection to prompts.
- Document human input in creative workflows (who did what, and when) to support ownership claims; a minimal journal sketch follows this list.
- Update contractor and employment agreements to address human authorship, moral rights, and assignment where applicable.
- Treat prompts as protected works: set internal rules for prompt creation, reuse, and confidentiality.
- Educate teams on where machine output alone may not create protectable rights.
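A lightweight way to capture who did what, and when, is a contribution journal kept alongside each asset. This is a minimal sketch with assumed field names, not a prescribed evidentiary format:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CreativeContribution:
    """One documented human action on an AI-assisted asset (illustrative schema)."""
    asset_id: str
    contributor: str  # a named human, to support authorship claims
    action: str       # e.g. "wrote prompt v3", "edited draft", "selected final cut"
    at: str           # ISO-8601 timestamp

def log_contribution(journal: list, asset_id: str, contributor: str, action: str) -> None:
    """Append a who-did-what-when entry that counsel can later cite as
    evidence of human creative input behind a given work."""
    journal.append(CreativeContribution(
        asset_id=asset_id,
        contributor=contributor,
        action=action,
        at=datetime.now(timezone.utc).isoformat(),
    ))
```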
Core Compliance Actions (Start Now)
- Inventory AI systems and classify them by risk; flag those in government-related or critical sectors.
- Benchmark high-risk systems against state-system cybersecurity requirements and remediate gaps.
- Stand up prohibited-use controls: product checks, content filters, and review gates.
- Implement AI labeling across products, services, docs, and partner channels.
- Refresh data and consent policies for any emotion-related or sensitive inferences.
- Update IP policies, training, and contracts to reflect human authorship rules and prompt protection.
- Set up an audit trail: risk assessments, testing results, and decision logs (see the sketch after this list).
- Assign accountable owners for AI governance, with escalation paths to legal and security.
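For the audit-trail item, an append-only log that ties each governance decision to its supporting evidence is a workable starting point. The schema below is an assumption for illustration, not a format the law prescribes:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_governance_audit.jsonl"  # append-only decision log (assumed name)

def log_governance_decision(system: str, decision: str, owner: str,
                            evidence: list[str]) -> None:
    """Record one governance decision with pointers to its supporting evidence,
    so an auditor can reconstruct who decided what, when, and on what basis."""
    entry = {
        "system": system,
        "decision": decision,  # e.g. "classified high risk", "feature gated"
        "owner": owner,        # the accountable owner from the checklist above
        "evidence": evidence,  # ticket IDs or paths to assessments and test results
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```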
Points to Clarify with Regulators
- How "manipulation," "vulnerabilities," and "prohibited content" are defined in practice.
- The consent standard for emotion recognition (format, scope, and retention).
- Which cybersecurity controls are deemed equivalent to state-system requirements.
- Penalties, grace periods, and audit expectations for noncompliance.
- Treatment of cross-border services and vendors touching Kazakhstan users or infrastructure.
Monitor official guidance and be ready to adjust controls as definitions and enforcement practices are published. Early alignment is cheaper than last-minute fixes under regulatory scrutiny.
Building internal capability for AI governance can help legal teams move faster with fewer surprises. For practical upskilling by job function, see our curated programs: AI Courses by Job.