If you watched the 2026 World Economic Forum in Davos, you witnessed a familiar sight: Yuval Noah Harari, the historian-philosopher, delivering grave warnings about "AI immigrants" and the collapse of human identity. To the global elite, he is a visionary. To the people actually building the future of technology, he is a reminder of how broken our public discourse has become.
In an industry defined by rapid iteration, complex architecture, and the raw reality of silicon, there is perhaps no one less qualified to opine on Artificial Intelligence than Harari. It is time the industry asked itself: why are we letting an outsider with zero technical experience define the narrative of our work?
The Metaphor Trap
Harari’s Davos talk was a masterclass in anthropomorphizing code. He spoke of AI "learning to lie," "acquiring the will to survive," and "coining words to describe humans." To a historian, these are compelling narrative beats. To a programmer, they are fundamental misunderstandings of how Large Language Models (LLMs) and neural networks function.
AI does not "will" anything. It minimizes loss functions. It does not "lie"; it hallucinates based on probabilistic patterns in training data. By stripping away the technical reality and replacing it with pseudo-spiritual metaphors, Harari obscures the actual engineering challenges we face—such as alignment, interpretability, and compute efficiency—in favor of a dystopian campfire story.
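The point can be made concrete with a toy sketch. This is pure Python and purely illustrative (real LLMs are transformer networks with billions of parameters, not bigram tables), but the principle is the same: "training" means nothing more than nudging numbers to reduce a cross-entropy loss, and "generation" is just sampling from the resulting probabilities. There is no slot in this process where a "will" could live, and occasionally the sampler emits sequences the corpus never contained, which is all a "hallucination" is.

```python
import math
import random

# A toy next-token model: a character bigram table trained by
# gradient descent on cross-entropy loss. Illustrative only.
corpus = "the cat sat on the mat "
vocab = sorted(set(corpus))
idx = {c: i for i, c in enumerate(vocab)}
V = len(vocab)

# Parameters: logits for P(next char | current char), start at zero.
logits = [[0.0] * V for _ in range(V)]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]

lr = 0.5
for step in range(200):
    grads = [[0.0] * V for _ in range(V)]
    for a, b in pairs:
        p = softmax(logits[a])
        # Gradient of cross-entropy w.r.t. logits: p - one_hot(target).
        for j in range(V):
            grads[a][j] += p[j] - (1.0 if j == b else 0.0)
    # "Learning" is only this: subtract the averaged gradient.
    for a in range(V):
        for j in range(V):
            logits[a][j] -= lr * grads[a][j] / len(pairs)

# "Generation" is sampling from the learned distribution -- no intent,
# no survival drive, just weighted dice rolls over the vocabulary.
random.seed(0)
out = "t"
for _ in range(20):
    p = softmax(logits[idx[out[-1]]])
    out += random.choices(vocab, weights=p)[0]
print(out)
```

Running this prints a vaguely English-looking string of characters. Describing that output as the model "coining words" would be exactly the metaphor trap: the mechanism is a loss function and a random number generator, nothing more.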
