1X CEO Bernt Bornich on the World Model that Teaches Humanoids New Skills


The fundamental challenge in robotics—the inability of machines to generalize skills beyond explicitly programmed or trained data sets—is finally starting to yield to advances in foundational AI models. Bernt Bornich, CEO and CTO of 1X Technologies, recently spoke on Bloomberg about the capabilities of the new World Model powering their humanoid robot, NEO, highlighting a crucial inflection point where robotics begins to follow the scaling laws long observed in large language models. The core message is clear: the path to useful, generalized humanoid intelligence is paved by physical embodiment that allows for learning through real-world, scalable experimentation, rather than reliance on finite human-gathered data.

Bornich spoke with Bloomberg’s Ed Hammond and Caroline Hyde about the latest update to the NEO model, emphasizing its ability to execute tasks it had never encountered before. This generalization capability is key to escaping the brittle nature of traditional industrial robotics, which must be painstakingly re-programmed for every new environment or variable. When asked for an example of a task the updated model could achieve for the first time, Bornich cited a simple but illustrative scenario: picking a Post-it note off a board and reading it. This seemingly trivial action requires a complex chain of perception, planning, and fine motor control that was not explicitly trained. Bornich explained the significance of this breakthrough, stating that the model enables the robot to handle “anything that you don’t have in your data set, but still being able to have a sensible approach.” This sensible approach, he noted, is the cornerstone of genuine learning, paving the way for robots to teach themselves through experimentation in the real world.

A crucial insight differentiating 1X’s approach is the deep coupling of the AI model with the physical form of the robot—the principle of embodiment. While many robotics labs utilize abstract or highly specialized hardware, NEO is deliberately designed to be as close to a human as possible, a strategy that accelerates the utility of training data. Bornich elaborated on this, arguing that, “NEO has been designed really over the last decade to be as close to a human as possible, because if you take all of the knowledge that we have in the world, like everything you can see on YouTube or any kind of video content, and you train a model on this... if your robot is actually not similar enough to a human, this doesn’t work anymore.” By mirroring the human form, NEO can directly leverage the immense, publicly available video data showing humans performing complex physical tasks, allowing the AI to transfer that knowledge effectively into its physical controls. This strategic design choice bypasses the prohibitive cost and time associated with collecting bespoke, robot-specific training data for every possible task.

The discussion also touched on the competitive landscape, particularly the role of technology giants like Nvidia, whose inference chips power NEO’s brain. While 1X uses Nvidia hardware, Bornich confirmed they maintain their proprietary world models. This decision underscores a key strategic tension in the robotics industry: whether to rely on general-purpose models provided by chipmakers or to build customized, physics-grounded models optimized for the specific kinematics and safety requirements of humanoid hardware. For 1X, the latter is clearly prioritized, especially when considering deployment in human-centric environments, such as homes or public spaces.

Safety is not an afterthought but an integral design component for 1X, addressed through multiple layers of control. This is particularly relevant as robots begin learning autonomously and interacting closely with people. Bornich broke down the safety philosophy into two categories. The first is passive, intrinsic safety: the physical design of the machine itself ensures it is soft, compliant, lightweight, and low-energy, much as humans are intrinsically safe around one another. The second layer is AI alignment, which ensures the robot consistently chooses the safest path. Bornich emphasized the importance of the model's ability to reason proactively: “The model actively reasons about like, hey, here are the things that I can visualize going wrong here, so I’m going to take this path here, which is the safest path.” This sophisticated risk assessment, embedded within the world model, is critical for real-world deployment where unexpected variables are the norm.

The final, and perhaps most disruptive, core insight Bornich shared relates to the scaling of intelligence in robotics. Traditionally, progress was bottlenecked by the difficulty of gathering real-world data and the reliance on teleoperation to teach robots new skills. However, with the advent of robust world models and embodied learning, this scaling dynamic is inverted. Bornich succinctly stated the new reality: “Your intelligence doesn’t scale with the amount of data you can collect with humans anymore. It actually scales with the number of robots you’ve deployed.” This means that every NEO robot deployed becomes an independent data source, continually improving the model as it performs useful work in diverse environments. This shift democratizes and accelerates the learning process, suggesting that the industry is entering an exponential phase where the deployment volume of humanoids directly dictates the rate of AI progress toward general intelligence. The focus, therefore, shifts from the arduous task of human data collection to the streamlined manufacturing and deployment of capable, safe hardware.