The claim that a cutting-edge humanoid robot can be "powerful enough to fracture a human skull" highlights the perilous chasm between aggressive innovation and fundamental safety. This chilling allegation, central to a recent lawsuit against Figure AI, underscores a critical tension in the burgeoning field of humanoid robotics: the relentless drive to market often collides with the meticulous, slow-burn demands of ensuring human safety.
Matthew Berman, a notable commentator on AI developments, recently dissected a whistleblower retaliation and wrongful termination lawsuit filed against Figure AI by Robert Gruendel, its former Head of Product Safety. Figure AI, a prominent American robotics firm, is at the forefront of developing general-purpose humanoid robots for both domestic and industrial applications. The lawsuit, a public document, lays bare a disturbing narrative of escalating safety concerns met with indifference and, ultimately, punitive action.
Gruendel was recruited into his leadership role at Figure AI just over a year ago, specifically tasked with building the company's global product safety program. With over two decades of experience in robotics, human-robot interaction, and compliance with international safety standards, he was a highly credentialed expert. Initially, his contributions were valued, earning him strong performance feedback and a raise within his first year. However, this early harmony soon gave way to discord as Gruendel's commitment to safety began to clash with Figure's corporate ethos.
The company’s self-declared core values, emphasizing directives such as "Move Fast & Be Technically Fearless," "maintain aggressive optimism," and "bring a commercially viable humanoid to market," painted a picture of a culture prioritizing speed and commercial viability above all else. This philosophy, Gruendel discovered, was not merely aspirational but deeply ingrained. He quickly observed a striking absence of formal safety procedures, incident-reporting systems, or comprehensive risk-assessment processes for the robots under development. Furthermore, Figure lacked dedicated Employee Health and Safety (EHS) staff, relying instead on a contractor with unrelated semiconductor experience.
Gruendel's warnings were not theoretical. He informed CEO Brett Adcock and Chief Engineer Kyle Edelberg that Figure’s humanoid robots were "powerful enough to fracture a human skull" and had already "carved a ¼-inch gash into a steel refrigerator door during a malfunction," narrowly missing an employee. These were not mere speculative risks but demonstrated capabilities. His concerns, however, were consistently treated as "obstacles" to progress rather than "obligations" to protect personnel and future customers.
The unique nature of AI-powered robotics further complicated matters. The lawsuit explicitly states that Figure's Helix AI system, which powers its humanoid robots, "has many risks traditional machine control does not, including hallucinations, unexplainable decisions, self-preservation, and apparent consciousness." This inherent unpredictability of advanced AI models demands an even greater degree of caution and robust safety mechanisms, a fact seemingly lost on Figure's leadership.
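To make that distinction concrete, a standard mitigation for this class of risk is to interpose a deterministic safety layer between a learned policy and the actuators, so that no model output reaches a motor without passing fixed, auditable checks. The sketch below is purely illustrative: the limits and names (`JointCommand`, `safety_gate`, the clamp thresholds) are hypothetical, and nothing here describes Figure's actual Helix architecture.

```python
from dataclasses import dataclass

# Hypothetical per-joint limits; real values would come from a certified
# risk assessment, not from the AI model.
MAX_JOINT_VELOCITY = 1.0   # rad/s, illustrative only
MAX_JOINT_TORQUE = 50.0    # N*m, illustrative only

@dataclass
class JointCommand:
    velocity: float  # rad/s
    torque: float    # N*m

def safety_gate(command: JointCommand, estop_engaged: bool) -> JointCommand:
    """Deterministic envelope between a learned policy and the actuators.

    Unlike the policy, this layer has no learned components: its behavior
    is fixed, testable, and auditable.
    """
    if estop_engaged:
        # An engaged E-stop overrides everything, including the AI's output.
        return JointCommand(velocity=0.0, torque=0.0)

    # Clamp whatever the model requested into the certified envelope.
    v = max(-MAX_JOINT_VELOCITY, min(MAX_JOINT_VELOCITY, command.velocity))
    t = max(-MAX_JOINT_TORQUE, min(MAX_JOINT_TORQUE, command.torque))
    return JointCommand(velocity=v, torque=t)
```

The point of such a layer is that a hallucinated or unexplainable model output gets bounded by logic that does not share the model's failure modes, which is precisely the safeguard the lawsuit alleges was deprioritized.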
Gruendel’s efforts to establish a robust safety framework were met with resistance. He developed a comprehensive safety roadmap, which, despite initial approval from Adcock and Edelberg, faced significant hurdles. Notably, Adcock and Edelberg expressed a "dislike of written product requirements," a stance Gruendel found "abnormal in the field of machinery safety." This aversion to documented safety protocols is particularly alarming in an industry where accountability and traceability are paramount.
The disregard for safety came to a head with the E-Stop program. Gruendel was tasked with certifying an E-Stop (emergency stop) function, a "critical risk reduction measure" for robots operating in shared workspaces. However, Edelberg abruptly ended the project, a decision the lawsuit characterizes as a "foundational shift in safety, as investors had been sold on the E-Stop." Later, a safety feature of the F.02 robot was removed without Gruendel's knowledge simply because the principal engineer "did not like the aesthetic appearance of the safety feature." When confronted, Adcock claimed he was "too busy to get involved."
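For readers unfamiliar with the term, an E-stop in machinery safety (the subject of standards such as ISO 13850) is a latching function: once triggered, the machine must stay in a safe state until a deliberate, separate reset. The minimal sketch below illustrates only that latching behavior; the class and names are hypothetical, and a real E-stop lives in certified hardware or safety PLCs, not application code like this.

```python
class EmergencyStop:
    """Minimal latching E-stop: trips immediately, resets only deliberately."""

    def __init__(self):
        self._tripped = False

    def trip(self) -> None:
        # Engaging the E-stop latches the safe state.
        self._tripped = True

    def reset(self) -> None:
        # Reset must be a separate, deliberate action. It must never restart
        # motion by itself; it only permits a subsequent start command.
        self._tripped = False

    @property
    def motion_permitted(self) -> bool:
        return not self._tripped


# Usage: a control loop checks the latch before every actuation cycle.
estop = EmergencyStop()
estop.trip()
assert not estop.motion_permitted  # no motion until an explicit reset
estop.reset()
assert estop.motion_permitted
```

What makes the function "critical" is exactly this independence: it halts the machine regardless of what the AI policy is doing, which is why cancelling its certification reads, in the lawsuit's framing, as a foundational retreat.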
Adcock's disengagement became a pattern. He progressively decreased his participation in safety discussions, moving from weekly to bi-weekly, then monthly, and eventually "stopped replying altogether to many of Plaintiff’s messages." This systematic sidelining culminated in Gruendel’s termination on September 2, 2025, just four days after he delivered a detailed written complaint about certification cancellations and the dangers posed by the robots. The official reason given was a "change in business direction" related to Figure's targeting of the home market, a rationale Gruendel alleges was a pretext for retaliation over his safety advocacy.
The lawsuit paints a picture of Figure executives consistently minimizing safety concerns, prioritizing "rapid development timelines and investor demonstrations over compliance and safety." This case serves as a stark reminder to founders, VCs, and AI professionals that the pursuit of technological advancement, no matter how ambitious or transformative, must never come at the expense of human safety. The potential for severe, permanent injury from powerful humanoid robots is not a theoretical problem but a tangible risk that demands unwavering commitment to ethical development and rigorous safety engineering.

