South Carolina Small Business Chamber pushes AI safeguards as legal fight looms over Trump order

South Carolina business leaders are pushing for AI rules ahead of the legislative session, with a focus on mental health and healthcare, and they're bracing for a court fight despite a presidential order.

Published on: Jan 03, 2026
South Carolina group pushes AI regulation ahead of legislative session

South Carolina's legislative session is set to open with a sharp debate: whether the state should regulate artificial intelligence despite a presidential order aimed at stopping states from doing so. The South Carolina Small Business Chamber of Commerce says the risks are too immediate to wait.

Frank Knapp, the chamber's president and CEO, isn't mincing words. "What we don't want is artificial intelligence grooming our children for sexual exploitation and for them to commit harm to themselves, even suicide," Knapp said. "The same happens with adults that look for emotional counseling with these AI chat boxes. That is a serious problem."

Healthcare focus drives state action

Knapp points to a simple trend: 47 states have bills in play to manage AI use in healthcare. The pressure is highest around mental health tools, where chatbots present as counselors without human oversight or clear safety nets.

Charleston resident Will Vandergrift supports tighter rules, especially for mental health. He argues AI can't replace a trained clinician's judgment in matters as nuanced as the human psyche.

Federal-state conflict expected

An executive order from President Donald Trump seeks to prevent state-level AI regulation to keep a unified national approach. Knapp believes states can't afford to wait for Congress or the executive branch to settle the issue. "We just can't wait for down the road. It's all happening now and it's up to the states," he said.

A legal clash is likely. If South Carolina or other states move forward, courts will have to decide how far a presidential order can preempt state authority, especially in core areas like public health and consumer protection.

What state leaders can act on now

  • Define high-risk uses: Classify AI in mental health and healthcare as high-risk. Require clear disclosures when users interact with AI, not a human.
  • Protect minors: Prohibit AI tools from targeting minors with counseling-like services. Set age-gating and parental consent rules.
  • Clinical safeguards: For mental-health-facing tools, require clinical oversight, crisis escalation protocols (e.g., suicide risk), and emergency resources.
  • Data and safety standards: Mandate privacy protections, incident reporting, and bias/risk testing for healthcare-related AI.
  • Public-sector procurement rules: Require agencies to adopt risk management practices aligned with recognized frameworks (for example, NIST's approach) before deploying AI.
  • Accountability and enforcement: Create complaint channels, penalties for deceptive practices, and audit authority for health and consumer regulators.
  • Oversight structure: Form an AI advisory council with healthcare, legal, technical, and child-safety expertise. Include clear review timelines and sunset clauses.

What government leaders should watch

  • Litigation timeline: Track any court challenges to state AI laws and how judges treat state police powers versus federal directives.
  • Interstate alignment: Coordinate with nearby states to reduce compliance confusion for providers and vendors.
  • Operational readiness: Budget for enforcement capacity, technical audits, and public education before rules take effect.
  • Provider communication: Give healthcare systems and app developers a clear compliance roadmap and phased deadlines.
  • Workforce upskilling: Train agency staff on AI risk, procurement, and oversight to make rules enforceable on day one.

Context and resources

For a national view of state activity on AI, see the National Conference of State Legislatures' tracker. It helps compare approaches and spot gaps before drafting bills. View NCSL's AI legislation tracker.

For risk and governance practices that public agencies can reference, the NIST AI Risk Management Framework is a practical starting point. Explore NIST's AI RMF.

The bottom line

South Carolina leaders face a clear choice: wait for federal direction or set guardrails where residents are most vulnerable, in healthcare and mental health. Expect a legal fight either way. The safest move is to draft focused, enforceable rules that protect people while the courts sort out the boundaries.
