US Faces Critical Choice: Can It Regulate AI Without Slowing Innovation?
Eighty-eight percent of organisations are now using AI in at least one business function. Seventy-one percent regularly deploy generative AI. Yet only 31% of AI initiatives have reached full production, and financial gains are concentrated among a small group of leaders.
This gap reveals the core tension facing US enterprises: adoption is accelerating faster than governance frameworks can mature. The question for executives is no longer whether regulation will arrive, but whether the current regulatory trajectory will enable or constrain growth.
The Fragmentation Problem
The US has no single AI regulatory framework. Instead, organisations navigate federal agency guidance alongside emerging state laws and international rules. This fragmentation creates real costs.
George Tziahanas, VP of Compliance and Associate General Counsel at Archive360, says the complexity is often overstated. Many state-level rules share common principles, reducing actual uncertainty.
But others disagree. Peri Kadaster, Chief Communications Officer at Nearform, argues that fragmentation deters scale. "Companies aren't hesitant to experiment with AI; they're hesitant to scale it," she said. Without clear regulatory direction, large organisations adopt a "highest common denominator" approach, designing systems to meet the strictest anticipated standards. This defensive posture slows deployment.
The constraint appears at the enterprise level, not in research labs. Innovation stalls at the point where governance, compliance, and risk management become critical.
Enterprises Are Building Governance Now
Rather than wait for regulatory clarity, leading organisations are embedding governance into AI strategy from the start.
Tziahanas highlights the shift: companies extend existing compliance frameworks rather than reinvent them. Standards like NIST's AI Risk Management Framework and ISO 42001 provide a foundation.
This represents a move from reactive compliance to proactive design. Instead of treating regulation as an external constraint, organisations build auditability, transparency, and risk controls into systems from the outset.
In financial services, this evolution is most advanced. Nicholas Goble, Director of Solution Architecture at Domino Data Lab, says firms leverage established model risk frameworks rather than waiting for AI-specific rules. "AI doesn't get special treatment," he said.
The challenge is scale. Traditional governance models handled dozens of systems. Now organisations manage hundreds or thousands, many of which change dynamically. Manual oversight processes no longer work. Automation, version control, and continuous monitoring are essential.
Leading organisations treat compliance as a design principle. They build modular systems that adapt to different regulatory environments, turning compliance into a product capability rather than a legal afterthought. This approach may become a competitive advantage.
Europe's Rules Are Becoming the Default
US companies operate in a global market. The EU AI Act's extraterritorial reach means US firms with European exposure must comply regardless of domestic rules.
Tziahanas explains the mechanism: "The EU AI Act has global reach because its penalties apply to global revenues."
This creates a de facto global baseline. Goble observes that in the absence of clear US standards, many organisations default to European requirements. "When you don't have a clear national standard, somebody else's standard becomes the default," he said.
Kadaster describes a paradox: fragmentation at home, convergence abroad. US firms navigate inconsistent domestic rules while aligning with stricter international frameworks.
The effect on competitiveness is mixed. Regulatory pressure has not reduced AI investment. Tziahanas notes that spending remains strong, with regulatory demands often driving more disciplined and scalable approaches.
But there is a risk. Regulatory ambiguity, rather than strictness, may push innovation offshore. Organisations increasingly move to jurisdictions with clearer expectations, even when compliance requirements there are higher.
What Comes Next
A comprehensive federal AI framework is unlikely in the near term. Regulators will adapt existing authorities incrementally rather than introduce sweeping legislation.
But there is broad agreement that clarity is essential. Lisa Sotto, partner at Hunton, said consistency would enable organisations to move faster with confidence.
Kadaster frames clarity as a competitive advantage. "Speed requires confidence, and confidence requires transparency about the guardrails," she said. Well-defined boundaries can accelerate innovation.
Flexibility remains equally important. Overly prescriptive rules risk becoming obsolete before implementation. Tziahanas warns that regulatory schemes must focus on outcomes, such as accuracy, bias, and security, rather than on detailed technical prescriptions.
The Real Lesson for Executives
Waiting for regulatory certainty is not viable. Governance, data quality, and risk management must be foundational capabilities, not compliance afterthoughts.
Tziahanas puts it directly: "If a company is waiting for regulatory clarity to adopt AI, they have already lost."
The AI regulation race is not a binary contest between innovation and oversight. It is an ongoing negotiation between speed and stability, risk and reward.
The US retains a significant advantage in innovation through its dynamic private sector and deep technology ecosystem. But without clearer regulatory direction, that advantage becomes harder to sustain as global standards evolve.
Success depends less on predicting regulation and more on preparing for it. Organisations that invest in robust governance frameworks, adaptable architectures, and transparent AI systems will navigate uncertainty most effectively.
The winners of the AI era will not be those who move fastest without constraints, but those who build systems that scale responsibly within them.