Figure AI faces whistleblower suit after safety chief warns robots could crack a human skull

Figure AI faces a lawsuit from a fired safety lead who warned its humanoids could cause lethal harm. For managers, it's a wake-up call: safety promises must match practice.

Published on: Nov 24, 2025

Figure AI faces lawsuit over robot safety warnings: what managers should take from it

Figure AI is being sued by a former head of product safety who says he was fired after warning leadership that the company's humanoid robots could cause lethal harm. According to the lawsuit, the engineer told CEO Brett Adcock and a senior technical leader that the robots were "powerful enough to crack a human skull." He also documented an incident where a unit allegedly cut a ¼-inch groove into a steel refrigerator door during a malfunction.

The complaint says the engineer was asked to present a "security roadmap" to two investors and then told not to "water it down." He alleges that after the funding closed, the plan was "dramatically changed," which he warned could be interpreted as fraud. Days later, he was terminated. The company reportedly cited a "change in business direction."

A Figure spokesperson told CNBC the engineer was let go for poor performance and that his allegations are false. The plaintiff is seeking economic, compensatory, and punitive damages, and is requesting a jury trial. His counsel believes this could be one of the first whistleblower cases focused on humanoid robot safety.

Why this matters for management

This case isn't just about one company. It's a stress test for how leaders balance speed, investor expectations, and safety in high-risk products. If your team is building systems with physical force, autonomy, or public exposure, the decisions you make on governance and disclosure are now a leadership risk, not just an engineering debate.

Key management risks highlighted

  • Safety vs. speed: Changing a safety roadmap after investor presentations invites legal scrutiny and reputational damage.
  • Whistleblower exposure: Treating internal safety complaints as "impediments" can escalate into litigation, discovery, and headlines.
  • Investor disclosure risk: Any material divergence between what was presented and what is built can be framed as misleading.
  • Operational liability: Physical harm potential (force, torque, pinch, sharp edges) requires industrial-grade controls and documentation.

Immediate actions for leaders building high-risk products

  • Institute a Safety Gate: Require a formal go/no-go review for releases with documented hazards, mitigations, and residual risk sign-off by an independent authority (not solely the product team).
  • Lock your "golden" safety plan: Version-control the plan presented to investors and customers. Any change triggers a documented impact assessment and re-approval.
  • Protect internal reporting: Stand up a confidential safety channel with anti-retaliation policy and board visibility. Acknowledge and timestamp every report.
  • Define stop conditions: Establish clear thresholds for pausing deployment (e.g., unplanned contact, force/torque exceedance, sensor failure, or uncontrolled motion); a minimal threshold-monitor sketch follows this list.
  • Test like an adversary: Run structured red-team tests for worst-case scenarios, including power loss, sensor spoofing, human proximity, and emergency stop effectiveness.
  • Separate incentives: Tie leadership compensation to safety outcomes and incident transparency, not just delivery dates.
  • Board oversight: Put product safety on every board agenda when shipping systems that can cause harm.
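To make "stop conditions" concrete, here is a minimal sketch in Python of a threshold monitor. Every condition name, telemetry field, and limit value below is an illustrative assumption for this article, not a value from Figure or from any standard; real limits come from your own hazard analysis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StopLimits:
    # Hypothetical placeholder values; derive real limits from your
    # hazard analysis and applicable standards (e.g., ISO/TS 15066).
    max_contact_force_n: float = 140.0
    max_joint_torque_nm: float = 50.0
    max_sensor_staleness_s: float = 0.1

def tripped_stop_conditions(telemetry: dict, limits: StopLimits) -> list[str]:
    """Return the stop conditions tripped by this telemetry sample
    (an empty list means the robot may keep operating)."""
    tripped = []
    if telemetry.get("unplanned_contact", False):
        tripped.append("unplanned_contact")
    if telemetry.get("contact_force_n", 0.0) > limits.max_contact_force_n:
        tripped.append("force_exceedance")
    if telemetry.get("joint_torque_nm", 0.0) > limits.max_joint_torque_nm:
        tripped.append("torque_exceedance")
    if telemetry.get("sensor_staleness_s", 0.0) > limits.max_sensor_staleness_s:
        tripped.append("sensor_failure")
    if telemetry.get("in_motion", False) and not telemetry.get("motion_commanded", True):
        tripped.append("uncontrolled_motion")
    return tripped

# Usage: any non-empty result should trigger the (separately implemented)
# safe-stop path and open an incident record.
reasons = tripped_stop_conditions(
    {"contact_force_n": 180.0, "sensor_staleness_s": 0.02}, StopLimits()
)
if reasons:
    print("STOP:", reasons)  # -> STOP: ['force_exceedance']
```

The design point is that the monitor only reports which conditions tripped; the safe-stop action itself belongs in a simpler, independent layer that the product team can't quietly tune around.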

Controls to expect in humanoid and mobile robotics

  • Speed and separation monitoring, force-limited operation, and fail-safe emergency stops.
  • Geofencing, restricted modes during learning/training, and supervised first deployments.
  • Event logging that's tamper-evident, with post-incident review timelines and owner assignments (see the hash-chain sketch after this list).
  • Supplier and firmware change controls so field updates can't bypass safety interlocks.
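Tamper-evident event logging is commonly implemented as a hash chain: each record commits to the hash of the one before it, so any later edit or deletion breaks verification. A minimal sketch, with an illustrative record schema of our own invention:

```python
import hashlib
import json
import time

def append_event(log: list[dict], event: dict) -> dict:
    """Append a record whose hash covers the previous record's hash,
    so any later edit or deletion breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"ts": time.time(), "event": event, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; False means the log was altered."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

log: list[dict] = []
append_event(log, {"type": "estop_pressed", "unit": "unit-demo-01"})
append_event(log, {"type": "incident_review_opened", "owner": "safety-lead"})
assert verify_chain(log)

log[0]["event"]["type"] = "nothing_happened"  # tampering...
assert not verify_chain(log)                  # ...is detected
```

Note that the chain alone only proves internal consistency: for it to hold up in a dispute, the current head hash must be periodically anchored somewhere the robot's own software can't rewrite, such as a write-once store or a third party.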

Legal and disclosure hygiene

  • Ensure investor decks, safety roadmaps, and engineering reality match. If they don't, document why and who approved the change.
  • Give your general counsel line of sight into all safety exceptions and waivers.
  • Train managers on handling protected complaints to avoid retaliation claims.

Helpful frameworks and references

  • ISO 10218-1/-2: safety requirements for industrial robots and robot system integration.
  • ISO/TS 15066: collaborative robots, including force and pressure limits for human contact.
  • ISO 13482: safety requirements for personal care robots.
  • ANSI/RIA R15.06: the U.S. adoption of ISO 10218.
  • IEC 61508: functional safety of electrical/electronic/programmable safety-related systems.

If your org is scaling AI and automation

Make safety fluency a leadership skill, not an afterthought. If you're building roadmaps or overseeing deployments, equip your team with shared language and standards so decisions are faster and cleaner.

Browse AI and automation courses by job role to level up product, ops, and legal stakeholders together.

Bottom line

Safety isn't a slide; it's a control system, a process, and a paper trail. If your product can move, lift, or strike, treat governance as part of the product. That's how you ship fast without creating risk you can't afford.

