AI Washing Is Now a Board-Level Liability Issue
Companies face accelerating enforcement action for overstating artificial intelligence capabilities. The SEC, DOJ, and FTC have all launched investigations. Private shareholder lawsuits alleging AI misrepresentation have doubled year-over-year. Directors now face personal liability under a "knew or should have known" standard if they approve AI claims without verifying their accuracy.
This is no longer theoretical risk. The enforcement pattern is clear, bipartisan, and accelerating.
The Enforcement Record
In March 2024, the SEC opened its AI washing enforcement program with simultaneous actions against two investment advisers: Delphia (USA) Inc. and Global Predictions Inc. Both firms had claimed to use sophisticated AI in investment decision-making. Investigation revealed the claims substantially overstated AI's actual role. Delphia paid $225,000; Global Predictions paid $175,000.
In January 2025, the SEC brought its first action against an operating company: Presto Automation, a restaurant technology firm. The company had claimed proprietary AI technology when the technology was actually owned and operated by a third party. When Presto later developed its own system, it falsely claimed to have eliminated third-party dependence even though substantial third-party components remained. The SEC also found that the vast majority of drive-through orders required human intervention, contradicting the company's public statements.
Criminal prosecution followed. In June 2024, the SEC and DOJ jointly charged Ilit Raz, founder of recruitment startup Joonko Diversity Inc., with securities fraud for claiming sophisticated AI technology that largely did not exist. In April 2025, they charged Albert Saniger, former CEO of Nate Inc., for raising over $42 million by falsely claiming his mobile shopping app used AI to autonomously complete purchases when it relied on teams of overseas contractors to manually process transactions.
The FTC launched "Operation AI Comply" in September 2024 with five simultaneous enforcement actions. In August 2025, the FTC filed suit against Air AI for marketing an agentic AI tool as capable of autonomously replacing human sales staff and generating increased profits. Estimated consumer losses reached $250,000 per affected business.
Private litigation is accelerating faster than government enforcement. Securities class actions alleging AI misrepresentation increased approximately 100 percent between 2023 and 2024. Through 2025, 51 AI-related securities class actions were filed. The most prominent case involves Apple Inc., where shareholders alleged that the company's June 2024 announcements about Apple Intelligence and advanced Siri capabilities misled the market when Apple allegedly had no functional prototype of the advertised features and subsequently delayed them to 2026. Apple's stock lost nearly one-quarter of its value, approximately $900 billion in market capitalization.
Other targets include AppLovin Corporation, Innodata Inc., Oddity Tech, DocGo Inc., and Evolv Technologies. A March 2025 ruling in the Southern District of New York denied DocGo's motion to dismiss, finding that allegations of AI capability misrepresentation were sufficiently pleaded to proceed.
Why Traditional Compliance Fails
AI washing exploits gaps that conventional oversight cannot reach. Legal departments understand disclosure requirements but lack technical expertise to evaluate AI capabilities. IT departments understand technology but may not grasp securities law implications. Marketing teams craft public messaging without comprehending technical limitations.
No single executive typically owns the complete picture: what AI systems exist, how sophisticated they truly are, what claims are made about them, and whether those claims are accurate. Boards lack metrics to evaluate management's AI representations or compare against competitors.
Only 25 percent of organizations have fully implemented AI governance programs. Just 27 percent of boards have formally incorporated AI governance into committee charters. This gap between awareness and execution is where AI washing thrives.
The SEC applies traditional antifraud provisions to AI claims but provides limited forward guidance on required disclosures or quality standards. Boards need proactive governance tools, not reactive compliance responses.
The Governance Solution: Quantified AI Quality Metrics
Standardized, quantitative AI governance metrics function as governance assurance mechanisms comparable to Sarbanes-Oxley internal controls. These metrics enable boards to verify management's AI claims based on objective, audited benchmarks.
Such metrics must be quantitative and normalized, enabling meaningful comparisons across organizations and against industry benchmarks. They must be independently verifiable through audit, not merely self-reported. They must assess AI comprehensively across multiple dimensions: governance maturity, technical robustness, responsible AI practices, strategic alignment, and organizational adaptability.
The AIQ Score framework, developed by AIQA Global LLC, illustrates one comprehensive approach. It assesses 250 proprietary data points across five weighted dimensions. Governance & Accountability carries the highest weight because governance failures are the primary driver of AI-related loss events. Strategic Alignment assesses whether AI is genuinely embedded in business strategy or merely a marketing claim. Technical Robustness evaluates whether AI systems actually work as described, including model validation, security testing, and bias audits. Responsible AI & Compliance measures alignment with the EU AI Act, NIST AI Risk Management Framework, and emerging disclosure requirements. Adaptability & Education captures whether organizations maintain feedback loops and incident response protocols.
The methodology uses a 0-200 scoring scale, modeled on IQ testing and existing patent rating systems. This provides sufficient granularity to differentiate governance maturity. Organizations scoring 115 or above may qualify for independent certification, representing third-party validation of AI governance quality.
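A composite methodology of this kind can be sketched as a weighted average of per-dimension scores on the 0-200 scale, checked against a certification threshold. The sketch below is illustrative only: the dimension names follow the five dimensions described above, but the specific weights, example scores, and the use of a simple weighted average are assumptions, not the published AIQA methodology.

```python
# Illustrative sketch of a weighted composite AI governance score on a
# 0-200 scale. Weights and example scores are hypothetical assumptions;
# the actual AIQ Score methodology is proprietary.

# Each dimension is scored 0-200 by independent audit; weights sum to 1.0.
DIMENSION_WEIGHTS = {
    # Highest weight: governance failures are the primary driver of AI loss events.
    "governance_accountability": 0.30,
    "strategic_alignment": 0.20,
    "technical_robustness": 0.20,
    "responsible_ai_compliance": 0.20,
    "adaptability_education": 0.10,
}

CERTIFICATION_THRESHOLD = 115  # scores at or above may qualify for certification


def composite_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each on a 0-200 scale."""
    missing = DIMENSION_WEIGHTS.keys() - dimension_scores.keys()
    if missing:
        raise ValueError(f"missing dimension scores: {sorted(missing)}")
    return sum(DIMENSION_WEIGHTS[d] * dimension_scores[d] for d in DIMENSION_WEIGHTS)


def certifiable(dimension_scores: dict[str, float]) -> bool:
    """True when the weighted composite meets the certification threshold."""
    return composite_score(dimension_scores) >= CERTIFICATION_THRESHOLD


# Example: a hypothetical audited score set.
scores = {
    "governance_accountability": 130,
    "strategic_alignment": 110,
    "technical_robustness": 120,
    "responsible_ai_compliance": 125,
    "adaptability_education": 100,
}
print(composite_score(scores), certifiable(scores))
```

The weighting choice mirrors the text: governance carries the most weight because governance failures drive most AI-related loss events, while education and adaptability carry the least.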
Whether using this specific framework or comparable methodologies that emerge, the critical element is independent verification through structured audit. Organizations submit quantitative surveys and documentation, which independent analysts validate through model inspection, metadata review, and audit trail examination, resembling financial audits where companies make representations and independent auditors verify claims.
Capital Markets and Insurance Implications
Quantitative AI governance scores enable capital markets applications beyond individual company governance. The emerging AIQA 100 Opportunity Index selects U.S.-listed companies based on proprietary AI quality scores and adoption opportunity assessment, applying the same construction discipline used for the NYSE-listed Ocean Tomo 300 Patent Index.
For insurers, the emerging underwriting standard for affirmative AI coverage requires three elements: bounded use-case definitions, measurable performance KPIs, and evidence of ongoing monitoring. Quantitative governance scores provide precisely this evidence. Companies achieving high scores may qualify for preferential AI liability insurance, a significant consideration as the AI insurance market bifurcates between specialist affirmative coverage and broad exclusions by major carriers.
D&O insurance may not provide complete protection against AI washing liability. Policies typically exclude coverage for fraudulent or intentional misrepresentation. If prosecutors or plaintiffs establish that directors approved AI-related disclosures without a reasonable basis for believing them accurate, insurers may deny coverage. Even where coverage applies, reputational damage from enforcement actions often exceeds financial penalties.
The Chief Intellectual Property Officer as Governance Owner
The Chief Intellectual Property Officer uniquely bridges technical AI validation and legal disclosure requirements, positioning this role as essential for boards seeking proactive AI washing prevention.
The CIPO role emerged as intangible assets came to dominate the economy. Reporting directly to the CEO, the CIPO provides centralized oversight of all IP activities, including portfolio administration, litigation management, licensing strategy, M&A considerations, and IP monetization. As AI becomes the dominant form of intangible capital, the CIPO role naturally expands to encompass AI asset management.
The CIPO bridges the technical-legal divide that confounds traditional legal counsel, understanding both the technology underlying AI systems and the legal frameworks governing disclosure. The role's strategic focus on value creation matches the emphasis that quantitative governance frameworks place on measurable business impact. Its C-suite positioning provides the authority needed for cross-functional coordination, and its mandate to both protect and monetize intangible assets pairs naturally with frameworks that weigh governance alongside value creation.
In organizations without a CIPO, oversight responsibility may fall to the Chief Technology Officer, Chief Information Officer, General Counsel, or a newly designated Chief AI Officer. Regardless of title, a single executive must own the complete picture of AI capabilities, claims, and governance, bridging the organizational fragmentation that enables AI washing.
Integration with Board Committee Structure
AI governance metrics managed by the CIPO integrate naturally into existing board committees, providing each with relevant AI quality information.
Audit Committee: Receives quarterly reporting on governance and compliance scores, focusing on disclosure controls and substantiation of AI-related statements in SEC filings. Reviews documentation supporting AI claims and evaluates the adequacy of internal controls around AI representations.
Risk Committee: Monitors technical robustness and responsible AI scores, assessing governance maturity and operational risk exposure. Evaluates AI-related risks, including bias, privacy violations, cybersecurity vulnerabilities, and regulatory compliance gaps.
Technology/Innovation Committee: Reviews strategic alignment and adaptability scores, evaluating competitive positioning and return on AI investments. Benchmarks the company's AI maturity against industry peers and assesses strategic AI initiatives.
Full Board: Receives quarterly reporting on the comprehensive composite score, analogous to financial performance reviews. Uses scores to evaluate management's AI strategy execution and benchmark progress against competitors.
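The committee assignments above amount to a routing table from score dimensions to oversight bodies. A minimal sketch, assuming hypothetical dimension identifiers, illustrates how dashboard software might slice one audited score set into per-committee reports:

```python
# Minimal sketch of routing AI governance dimensions to board committees,
# mirroring the committee assignments described above. Dimension and
# committee identifiers are illustrative assumptions.

COMMITTEE_METRICS = {
    "audit": ["governance_accountability", "responsible_ai_compliance"],
    "risk": ["technical_robustness", "responsible_ai_compliance"],
    "technology": ["strategic_alignment", "adaptability_education"],
}


def committee_report(committee: str, dimension_scores: dict[str, float]) -> dict[str, float]:
    """Slice the full score set down to the dimensions a committee oversees."""
    return {d: dimension_scores[d] for d in COMMITTEE_METRICS[committee]}


scores = {
    "governance_accountability": 130,
    "strategic_alignment": 110,
    "technical_robustness": 120,
    "responsible_ai_compliance": 125,
    "adaptability_education": 100,
}
# The risk committee sees only robustness and responsible-AI scores.
print(committee_report("risk", scores))
```

One dimension can feed multiple committees (here, responsible AI compliance reaches both audit and risk), while the full board receives the composite rather than any single slice.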
Implementation Steps for Boards
Step 1: Mandate Management Certification. Require the CIPO or equivalent executive to certify quarterly that all AI-related disclosures in SEC filings, earnings calls, investor presentations, and marketing materials are factually substantiated and supported by documentation. This certification creates personal accountability analogous to SOX financial certifications and supports a "reasonable steps" defense in any subsequent enforcement action.
Step 2: Integrate AI Governance Metrics into Enterprise Risk Dashboards. Include AI governance score trends in regular board reporting alongside cybersecurity metrics, ESG performance, and financial KPIs. Establish threshold scores requiring board notification if performance deteriorates. Track competitive positioning by benchmarking against industry peer scores.
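The threshold-notification logic in Step 2 can be sketched as a simple check over quarterly score history. The floor value, drop tolerance, and score series below are hypothetical assumptions for illustration; a real dashboard would use thresholds the board sets and independently audited scores.

```python
# Illustrative board-notification trigger for an AI governance dashboard.
# NOTIFY_FLOOR and NOTIFY_DROP are hypothetical thresholds a board might set.

NOTIFY_FLOOR = 110  # absolute score below which the board is notified
NOTIFY_DROP = 10    # quarter-over-quarter decline that triggers notification


def board_notifications(quarterly_scores: list[int]) -> list[str]:
    """Return the alerts a governance dashboard would surface to the board."""
    alerts = []
    for i, score in enumerate(quarterly_scores):
        if score < NOTIFY_FLOOR:
            alerts.append(f"Q{i + 1}: score {score} below floor {NOTIFY_FLOOR}")
        if i > 0 and quarterly_scores[i - 1] - score >= NOTIFY_DROP:
            drop = quarterly_scores[i - 1] - score
            alerts.append(f"Q{i + 1}: score fell {drop} points quarter-over-quarter")
    return alerts


# Example: steady scores, then a sharp deterioration in Q4 that trips
# both the floor check and the drop check.
print(board_notifications([125, 128, 126, 108]))
```

Benchmarking against peer scores, the other element of Step 2, would compare the same series against an industry distribution rather than fixed thresholds.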
Step 3: Establish Clear Committee Responsibility. Assign AI oversight formally, incorporating AI governance into existing committee charters. Ensure that at least one director on the responsible committee possesses AI literacy or a technical background. Consider engaging periodic third-party briefings on AI developments and governance best practices. Update board education programs to include AI governance topics.
Step 4: Tie Compensation to AI Integrity. Link executive compensation to maintenance or improvement of AI governance score thresholds. Include AI governance metrics in annual CEO and CIPO performance evaluations. Align incentives so that executives prioritize genuine AI excellence over inflated marketing claims.
Step 5: Report AI Governance Scores in Public Disclosures. Enhance transparency and investor trust by publicly disclosing verified AI governance metrics in ESG reports or annual reports. Include an independent certification demonstrating a third-party audit. Use verified scores as competitive differentiation in capital raising and investor communications. This public commitment creates reputational incentive for accuracy while demonstrating governance maturity to regulators.
Step 6: Prepare for Multi-Agency Exposure. Recognize that AI-related claims face scrutiny from multiple enforcement bodies: the SEC, DOJ, FTC, and state attorneys general, as well as private shareholder litigation. Ensure that compliance procedures address not only securities disclosure but also consumer protection, employment law, and sector-specific regulatory requirements. Incorporate potential parallel proceedings into incident-response playbooks.
The Regulatory Landscape
The European Union's AI Act, which entered into force on August 1, 2024, represents the first comprehensive legal framework governing AI systems. Prohibitions on certain AI practices took effect on February 2, 2025, with full enforcement for high-risk AI systems following in August 2026. Noncompliance carries fines up to €35 million or 7 percent of worldwide annual turnover. These requirements establish a transparency baseline that will influence global expectations even for U.S. companies.
The United States lacks comprehensive AI regulation comparable to the EU Act, though the regulatory environment is rapidly evolving. In February 2025, the SEC rebranded its Crypto Assets and Cyber Unit as the Cyber and Emerging Technologies Unit (CETU), explicitly tasking it with combating AI-related fraud. The SEC's Division of Examinations incorporated AI as a top priority in its 2026 Examination Priorities, signaling it will closely examine companies' use of AI and automated technologies, scrutinizing whether related disclosures are accurate and whether firms have implemented adequate policies and procedures to monitor AI use.
In 2025 alone, 1,208 AI-related bills were introduced across all 50 states, of which 145 were enacted into law. California's AB 2013, effective January 1, 2026, mandates that generative AI developers publish training data summaries. SB 942 requires AI-generated content labeling. A December 2025 Executive Order sought "minimally burdensome" national standards to prevent state laws from obstructing innovation, creating ongoing tension between federal and state approaches.
From Liability to Competitive Advantage
AI washing is no longer speculative. It is a recognized regulatory and reputational risk reaching the boardroom. Investors, regulators, and insurers increasingly demand assurance that AI claims reflect auditable facts rather than marketing optimism.
The stakes are significant. Failed AI claims damage market credibility and shareholder value. Misleading disclosures trigger SEC enforcement, shareholder litigation, and potential criminal prosecution. Directors face personal liability under the "knew or should have known" standard if they approve AI-related disclosures without a reasonable basis for believing them accurate.
Quantitative AI governance metrics offer the clearest path forward. Such metrics transform AI governance from reactive compliance to proactive assurance, functioning as governance infrastructure comparable to SOX internal controls. The framework enables directors to fulfill their fiduciary duties of care and loyalty by implementing systematic oversight of AI quality and accuracy.
But quantitative governance metrics provide more than defensive protection. They create competitive advantage. Companies with verified AI excellence can credibly differentiate themselves in capital markets. Independently audited scores enable legitimate innovators to separate from companies making inflated claims. Organizations achieving high governance scores position themselves for preferential AI liability insurance, reducing risk management costs while demonstrating governance maturity.
The board's role is decisive. Directors who implement AI governance frameworks now position their organizations as trusted AI leaders rather than suspected AI washers. Those continuing to rely on unverified management assertions face mounting enforcement risk, competitive disadvantage as peers adopt standardized metrics, and potential exclusion from capital markets demanding AI transparency.
Intangible assets now comprise 92 percent of S&P 500 market value, up from 68 percent in 1995. AI dominates intangible capital. Governance must evolve to measure what matters most: the quality and integrity of AI itself.
The convergence of enforcement pressure, regulatory development, and investor scrutiny creates a decisive moment for board leadership. The choice is between verified AI excellence and unsupported AI assertions. Only the former represents a sustainable strategy for boards committed to fiduciary responsibility, market credibility, and competitive advantage.