A lawsuit filed by a former safety executive accuses Figure AI of firing him after he raised concerns about the dangerous force of its humanoid robots. The startup, recently valued at $39 billion, now faces whistleblower claims that its machines pose serious safety risks and that leadership buried those warnings.
Figure AI robots are allegedly strong enough to cause lethal injuries

Robert Gruendel, former head of product safety, claims he warned leadership that the company’s humanoid robot could deliver enough force to fracture a human skull. In one reported incident, a robot gashed open a steel refrigerator door. Instead of addressing the concern, executives allegedly dismissed it as an “obstacle.”
Days after documenting those complaints, Gruendel says he was fired.
Lawsuit targets Figure AI over safety suppression and investor optics
The lawsuit states that leadership instructed Gruendel to create a safety roadmap for potential investors, but he claims they gutted the plan shortly after closing a major funding round. That timing, he argues, suggests Figure AI downplayed known risks, potentially defrauding investors.
Backed by Nvidia, Microsoft, and Jeff Bezos, the company has grown at record speed. Its valuation has increased roughly 15-fold since early 2024, and it plans to deploy 200,000 robots by 2029.
Figure AI denies claims, blames poor performance
A spokesperson for Figure AI said Gruendel was let go for performance issues, not retaliation, and that the company intends to “thoroughly discredit” the lawsuit in court. Still, Gruendel’s legal team frames the case as one of the first safety whistleblower suits involving humanoid robots.
What the lawsuit demands
Gruendel is asking for:
- Economic damages
- Compensatory damages
- Punitive damages
- A jury trial
Humanoid robots, human consequences
Whether the case succeeds or not, it pushes a crucial question into the spotlight: How do we hold companies accountable when their robots might be strong enough to kill?