AI Regulations Are Coming Fast. Here's What You Need to Do Now
Governments worldwide are putting AI regulations into effect this year at a pace unseen with previous technologies. China enacted AI laws in 2023, the EU's AI Act took effect in 2024, and U.S. states are passing rules faster than they did with privacy laws. The result: a fragmented regulatory environment that enterprises must navigate immediately.
Fifty different AI laws are either in effect or scheduled to take effect soon across 19 U.S. states alone, according to Taft Law. Washington, Arizona, New Mexico, Nebraska, and Massachusetts are among states still developing their own rules. A December executive order from the White House attempted to slow state-level action by directing the U.S. Department of Justice to block enforcement, but states are pushing forward anyway.
Federal lawmakers are also advancing AI bills, but coordination between state and federal governments remains absent. "The landscape will definitely get more complicated before it simplifies," according to industry experts.
Start With Frameworks, Not Laws
Rather than chasing every new regulation, companies should adopt a different approach: build to a recognized framework first, then adapt to specific laws as they emerge.
Three frameworks merit attention: the NIST AI Risk Management Framework, ISO 42001, and the OECD AI Principles. Building to a framework is easier operationally than building to a law, and it gives legal teams a foundation to map against regulatory requirements as they change.
Even comprehensive laws like the EU AI Act shouldn't trigger immediate full compliance efforts. The EU is currently working on the Digital Omnibus legislation, which will modify the AI Act to reduce administrative overhead and potentially delay some provisions by one or two years. Investing heavily in compliance now risks wasted effort.
What companies should do: build control frameworks based on NIST standards and invest in AI literacy training for employees. This creates a foundation that adapts as regulations settle.
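One lightweight way to start is an internal control register that maps each AI control to a framework function, so legal and compliance teams can trace coverage as laws change. A minimal sketch in Python: the NIST AI RMF core functions (Govern, Map, Measure, Manage) are real, but the control names, IDs, and owners below are purely illustrative.

```python
# Minimal AI control register mapped to the NIST AI RMF core functions.
# Control names, IDs, and owners are illustrative, not from any standard.
NIST_FUNCTIONS = {"GOVERN", "MAP", "MEASURE", "MANAGE"}

controls = [
    {"id": "C-01", "name": "Model inventory", "function": "MAP", "owner": "Platform"},
    {"id": "C-02", "name": "Pre-deployment bias testing", "function": "MEASURE", "owner": "Data Science"},
    {"id": "C-03", "name": "Incident response runbook", "function": "MANAGE", "owner": "Security"},
    {"id": "C-04", "name": "AI use policy and training", "function": "GOVERN", "owner": "Legal"},
]

def coverage_gaps(controls):
    """Return NIST AI RMF functions with no mapped control."""
    covered = {c["function"] for c in controls}
    return sorted(NIST_FUNCTIONS - covered)

print(coverage_gaps(controls))  # a complete register prints []
```

When a new law arrives, the legal team maps its requirements against this register rather than starting from scratch, which is the operational payoff of building to a framework first.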
Plan for Real-Time Monitoring, Not Periodic Audits
AI creates a compliance problem that doesn't exist with traditional software. A model that works correctly today may give different answers to the same questions tomorrow. Testing and deploying once doesn't work.
Periodic audits and compliance checkpoints are insufficient. Organizations need real-time monitoring of AI decisions to catch anomalies immediately. This is fundamentally different from how companies have approached compliance in the past, but it's now required.
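To make the contrast with periodic audits concrete, here is a minimal sketch of what per-decision monitoring could look like: each decision's score is checked against a rolling baseline as it happens, rather than sampled at audit time. The window size, threshold, and the idea of monitoring a confidence score are illustrative assumptions, not requirements from any regulation.

```python
from collections import deque
import statistics

class DriftMonitor:
    """Flags AI decisions whose scores drift from a rolling baseline.

    Illustrative sketch: each decision's confidence score is compared
    to the recent window; outliers beyond `threshold` standard
    deviations are flagged. Parameters are arbitrary examples.
    """

    def __init__(self, window=100, threshold=3.0):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, score):
        """Record one decision's score; return True if it is anomalous."""
        if len(self.scores) >= 10:  # need a minimal baseline first
            mean = statistics.fmean(self.scores)
            stdev = statistics.pstdev(self.scores)
            if stdev > 0 and abs(score - mean) / stdev > self.threshold:
                self.scores.append(score)
                return True
        self.scores.append(score)
        return False
```

In production this would feed an alerting pipeline rather than return a boolean; the point is that every decision is checked in real time, not reviewed quarterly.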
Companies that have invested in cybersecurity, IT, and privacy controls are already positioned well for this shift. Those foundations translate directly to AI governance.
What's Actually Regulated Right Now
Europe: The EU AI Act bans certain applications outright: harmful behavior manipulation, social scoring, and some facial recognition uses. High-risk AI systems affecting health or employment face stringent requirements: risk management systems, data governance, auditing, human oversight, and ongoing quality management. Penalties reach 7% of global revenues. The law applies to any company serving EU customers, not just European firms.
China: Laws require security assessments, content moderation, and data localization. New rules covering AI model security, data protection, and algorithm registration took effect in 2025. Draft rules on autonomous AI agents are expected this year. Foreign companies offering AI services in China face strict requirements.
United States: No federal law exists. State rules vary widely. Utah requires disclosure when customers interact with AI instead of humans. Texas restricts AI designed to manipulate behavior or exploit children. Illinois requires employers to notify people affected by AI in hiring or promotion decisions and bans discriminatory AI systems. California requires training data transparency, safety frameworks, incident reporting, and watermarking of AI-generated content. Colorado covers high-risk AI in employment, healthcare, and financial services.
Other regions: South Korea's AI Basic Act addresses transparency and high-risk systems. Singapore launched the first global framework specifically for autonomous AI agents in January. Japan's AI Act provides government guidance rather than penalties. India released voluntary AI governance guidelines in February.
The Next Regulatory Front: Autonomous Agents
Current regulations focus on chatbots, privacy, and the accuracy of individual AI decisions. But AI is shifting toward autonomous agents: interconnected systems where each agent carries out tasks, accesses data, and interacts with other agents.
This creates a new problem: kill switches don't work. Some regulations require the ability to turn off AI functionality and revert to previous systems. With autonomous agents, the best companies can do is disable individual AI functions, not the entire system. This introduces significant risk.
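Absent a system-wide kill switch, one interim pattern is a per-capability flag that operations can flip to disable a single agent function while the rest of the system degrades gracefully. A hypothetical sketch; the capability names and fallback behavior are illustrative, not drawn from any regulation or product.

```python
# Hypothetical per-function kill switch for an agent system.
# Disabling one capability degrades gracefully instead of halting everything.
class CapabilityFlags:
    def __init__(self, capabilities):
        self._enabled = {name: True for name in capabilities}

    def disable(self, name):
        self._enabled[name] = False

    def guard(self, name, action, fallback=None):
        """Run `action` only if the capability is enabled, else `fallback`."""
        if self._enabled.get(name, False):
            return action()
        return fallback() if fallback else None

flags = CapabilityFlags(["summarize", "send_email", "book_travel"])
flags.disable("send_email")  # e.g. after an incident

result = flags.guard("send_email", lambda: "sent",
                     fallback=lambda: "queued for human review")
print(result)  # → "queued for human review"
```

This pattern limits blast radius but does not satisfy a regulatory requirement to revert the whole system, which is exactly the gap the article describes.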
New legislation addressing autonomous agents will likely emerge by 2027, according to industry experts. Regulators need time to understand the technology and develop rules. Companies should monitor this space closely.
Other Laws Still Apply
AI systems remain subject to existing regulations: data security laws, privacy laws, laws about automated decision-making, and copyright law. Consumer protection laws apply too. If an AI system deceives customers, it can trigger legal scrutiny regardless of whether specific AI regulations exist.
For strategy leaders, the practical implication is clear: AI governance isn't a separate compliance function. It sits at the intersection of data, privacy, security, employment law, and consumer protection. Organizations need cross-functional teams to manage this.
What to Do Now
- Don't wait for regulations to stabilize before acting. Build to a framework now.
- Implement real-time monitoring of AI systems, not periodic audits.
- Map your AI governance to NIST, ISO 42001, or OECD standards as a foundation.
- Train employees on AI literacy and compliance requirements.
- Monitor state and international regulations in markets where you operate.
- Prepare for autonomous agent regulations coming within the next two years.
The regulatory environment will remain fragmented and unstable for the next few years. Companies that build flexible governance frameworks now will adapt faster than those waiting for clarity.