Kazakhstan's AI law is now in force: key points for legal teams
Kazakhstan's comprehensive law on artificial intelligence took effect on January 18. It classifies AI systems by risk, imposes state-grade cybersecurity on high-risk deployments, bans several harmful use cases, mandates labeling of AI output, and clarifies how copyright applies - including protection for prompts.
Risk-based classification
The law classifies AI systems by level of risk. The highest tier covers systems used in government bodies and critical sectors, which are now treated as state systems for cybersecurity purposes.
- High risk (government and critical sectors): Must comply with state-level cybersecurity requirements equivalent to government systems.
- Other tiers: Still subject to risk-appropriate controls, documentation, and oversight consistent with the law's framework.
Prohibited AI practices
- Manipulation of users.
- Discrimination.
- Emotion recognition without consent.
- Exploitation of people's vulnerabilities.
- Creation of prohibited content.
These prohibitions affect product design, data collection, and model deployment. Review any features that infer emotions, target sensitive groups, or rely on persuasive optimization without meaningful consent.
Cybersecurity obligations at the highest tier
AI used in government and critical sectors is treated as a state system for cybersecurity purposes. Expect strict access controls, monitoring, incident response, and documentation on par with government information-security standards.
- Map where AI components touch critical infrastructure or public-sector workflows.
- Align controls, logging, and testing with state-system expectations.
- Run threat modeling for model inputs/outputs (prompt injection, data exfiltration, model abuse).
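As a concrete illustration of that last item, here is a minimal Python sketch of one input-screening control. The deny-list patterns are hypothetical; heuristics like these should be paired with model-side guardrails, logging, and output monitoring, not relied on alone.

```python
import re

# Hypothetical deny-list patterns for common prompt-injection phrasing.
# Real deployments should combine screening with model-side guardrails
# and monitoring of model outputs, not rely on patterns alone.
SUSPECT_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) system prompt",
    r"\bBEGIN\s+EXFIL\b",
]

def screen_user_input(text: str) -> list[str]:
    """Return the deny-list patterns matched in a user message."""
    return [p for p in SUSPECT_PATTERNS if re.search(p, text, re.IGNORECASE)]

def handle_request(user_text: str) -> str:
    findings = screen_user_input(user_text)
    if findings:
        # Log and block or route to human review; never forward silently.
        raise ValueError(f"Input flagged for review: {findings}")
    return user_text  # safe to pass on to the model pipeline
```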
Labeling and transparency
Content, goods, and services created using AI must be labeled. This extends beyond text and images - think product features, chat assistants, recommendations, and any customer-facing output that relies on AI.
- Define what "created using AI" means in your context and document thresholds.
- Add visible labels for AI-generated content across web, app, and physical product touchpoints (see the labeling sketch after this list).
- Update marketing and UX copy to maintain consistent disclosures.
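One way to operationalize labeling is to attach the flag at the point of generation, so every downstream surface renders the same disclosure. The sketch below assumes a Python service layer; the disclosure wording and the decision of when content counts as AI-generated are placeholders your counsel would define.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

AI_DISCLOSURE = "This content was created using AI."  # illustrative wording only

@dataclass
class LabeledOutput:
    body: str
    ai_generated: bool
    disclosure: str | None
    labeled_at: str  # ISO 8601 timestamp for audit trails

def label_ai_output(body: str, ai_generated: bool) -> LabeledOutput:
    """Attach a machine-readable flag and a human-readable disclosure."""
    return LabeledOutput(
        body=body,
        ai_generated=ai_generated,
        disclosure=AI_DISCLOSURE if ai_generated else None,
        labeled_at=datetime.now(timezone.utc).isoformat(),
    )
```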
Copyright and prompts
Copyright is recognized only where there is a human creative contribution. Purely machine-generated output without meaningful human authorship may not qualify.
Prompts are protected by law. Treat prompts as creative inputs and handle them as protected assets in contracts, internal policies, and NDAs.
- Add "human-in-the-loop" standards for creative works and keep evidence of human contribution.
- Clarify ownership and permitted use of prompts created by employees, contractors, and users.
- Address prompt sharing, reuse, and security (to prevent leakage of trade secrets); see the registry sketch after this list.
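A simple way to make these bullets operational is an internal prompt registry that records authorship, confidentiality, and approved uses. The Python sketch below is illustrative; the field names and the default-confidential stance are assumptions, not requirements of the law.

```python
from dataclasses import dataclass, field
from enum import Enum

class PromptOwner(Enum):
    EMPLOYEE = "employee"
    CONTRACTOR = "contractor"
    USER = "user"

@dataclass
class PromptRecord:
    prompt_id: str
    text: str
    owner_type: PromptOwner
    author: str
    confidential: bool = True  # default-closed until counsel clears reuse
    approved_uses: list[str] = field(default_factory=list)

def may_reuse(record: PromptRecord, use: str) -> bool:
    """Reuse is permitted only for uses explicitly approved in the record."""
    return use in record.approved_uses
```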
Who is most affected
- Government vendors and critical-sector operators: Immediate cybersecurity uplift for AI components.
- Consumer platforms, marketing, and product teams: Labeling obligations and bans on manipulative or discriminatory features.
- HR tech and safety tools: Remove or gate emotion recognition unless you have informed consent and a lawful basis.
Practical next steps for in-house counsel and compliance
- Create an AI system inventory, tagged by use case, user impact, data sensitivity, and sector.
- Assign a risk tier per system; flag anything in government or critical workflows as high risk (a minimal tiering sketch follows this list).
- Gap-assess cybersecurity controls for high-risk systems against state-level requirements.
- Deploy AI labeling across all relevant customer and employee touchpoints.
- Revise product and data policies to prohibit manipulation, discrimination, and vulnerability exploitation.
- Update IP policies and contracts: define human authorship standards, prompt ownership, and usage rights.
- Train teams (legal, product, marketing, data) on consent requirements for emotion-related features.
- Set up third-party and vendor review: ensure suppliers meet these obligations, especially for embedded models.
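For teams that track the inventory in code, a minimal sketch of the first two steps might look like the following. The sector names treated as critical are illustrative; the actual criteria come from the law and its implementing regulations.

```python
from dataclasses import dataclass

# Illustrative only; the law and regulations define which sectors count as critical.
GOVERNMENT_OR_CRITICAL = {"government", "energy", "finance", "healthcare"}

@dataclass
class AISystem:
    name: str
    use_case: str
    user_impact: str       # e.g. "advisory" vs. "automated decision"
    data_sensitivity: str  # e.g. "public", "personal", "special category"
    sector: str

def risk_tier(system: AISystem) -> str:
    """Flag government and critical-sector systems as high risk;
    everything else is queued for risk-appropriate legal review."""
    if system.sector in GOVERNMENT_OR_CRITICAL:
        return "high"
    return "review"  # placeholder; final tier set by legal after assessment

inventory = [
    AISystem("chat-assistant", "customer support", "advisory", "personal", "retail"),
    AISystem("grid-forecaster", "load forecasting", "operational", "internal", "energy"),
]
for s in inventory:
    print(f"{s.name} -> {risk_tier(s)}")
```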
Policy and contract language to add
- AI Use Policy: Risk tiering, prohibited practices, labeling rules, consent standards for emotion inference.
- Security Addendum: State-system controls for high-risk deployments; incident reporting; audit rights.
- IP and Content Terms: Human authorship criteria, prompt ownership and confidentiality, rights over AI-assisted output.
- Vendor agreements and DPAs: Representations on non-manipulative behavior, discrimination safeguards, and transparency commitments.
If your teams need structured guidance on prompt creation and stewardship - especially now that prompts are protected - consider internal training supported by curated prompt courses.
The bottom line: classify your systems, harden security where required, label AI output, and tighten IP and product policies. The sooner you operationalize these basics, the lower your legal exposure under Kazakhstan's new law.