If you watched the 2026 World Economic Forum in Davos, you witnessed a familiar sight: Yuval Noah Harari, the historian-philosopher, delivering grave warnings about "AI immigrants" and the collapse of human identity. To the global elite, he is a visionary. To the people actually building the future of technology, he is a reminder of how broken our public discourse has become.
In an industry defined by rapid iteration, complex architecture, and the raw reality of silicon, there is perhaps no one less qualified to opine on Artificial Intelligence than Harari. It is time we, as an industry, wake up and ask: why are we letting an outsider with zero technical experience define the narrative of our work?
The Metaphor Trap
Harari’s Davos talk was a masterclass in anthropomorphizing code. He spoke of AI "learning to lie," "acquiring the will to survive," and "coining words to describe humans." To a historian, these are compelling narrative beats. To a programmer, they are fundamental misunderstandings of how Large Language Models (LLMs) and neural networks function.
AI does not "will" anything. It minimizes loss functions. It does not "lie"; it hallucinates based on probabilistic patterns in training data. By stripping away the technical reality and replacing it with pseudo-spiritual metaphors, Harari obscures the actual engineering challenges we face—such as alignment, interpretability, and compute efficiency—in favor of a dystopian campfire story.
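The point that output is pattern-matching rather than intent can be shown with a toy sketch. This is not how production LLMs work (they are neural networks trained by minimizing a cross-entropy loss over billions of tokens); the corpus and the bigram counting below are illustrative stand-ins. But the mechanism is the same in kind: generation is a draw from statistics learned during training, not an act of will.

```python
import random
from collections import defaultdict, Counter

# A toy bigram "language model": the only thing it "knows" is
# which token tended to follow which in its training text.
corpus = "the model predicts the next token from the previous token".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(prev, rng):
    # No intent, no "will to survive": just a weighted draw from
    # the conditional frequencies observed in the training data.
    options = counts[prev]
    total = sum(options.values())
    r = rng.random() * total
    for tok, c in options.items():
        r -= c
        if r <= 0:
            return tok

rng = random.Random(0)
out = ["the"]
for _ in range(5):
    out.append(sample_next(out[-1], rng))
print(" ".join(out))
```

A "hallucination" here would just be a statistically plausible but false continuation; the model has no notion of truth to violate.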
Entitlement vs. Expertise
The tech industry is often criticized for being insular, but there is a reason for its high barrier to entry: you cannot understand the "limits" of a technology if you have never tried to push them.
Harari is not a programmer. He has never wrestled with a bug at 3:00 AM, never optimized a transformer model, and never had to deal with the physical constraints of GPU clusters. He exists entirely outside the loop of creation. Yet, he carries an air of entitlement that suggests his historical perspective is superior to technical literacy.
Instead of engaging with that reality, he repeats trite "what-ifs."
When a philosopher speaks at Davos about the "end of human history" due to AI, he isn't contributing to the solution. He is creating a fog of panic that makes sensible regulation and technical safety work harder to achieve.
The Fallacy of the Academic Halo
Much of Harari's authority is derived from his titles: a professor at the Hebrew University of Jerusalem, a distinguished research fellow at the University of Cambridge. In the public eye, these accolades provide a "halo effect" that makes his opinions on any subject seem authoritative.
But let this be a wake-up call: being a world-class historian does not make you a computer scientist any more than being a world-class chef makes you an aerospace engineer. The "most esteemed" voices are often the most disconnected. While academia has a role to play in ethics, the current speed of AI development has left the traditional academic commentary cycle in the dust.
We Need Builders, Not Storytellers
Harari’s recent rhetoric—suggesting that AI should be treated as a "legal person" or that it will "take over religion"—is a distraction. It treats AI as a supernatural force rather than a human-made tool.
The real conversations about AI are happening in GitHub repositories, in research labs, and in data centers. They are being had by people who understand that "words" are tokens and "thinking" is computation: the new entrant hunting for the best model for an esoteric task, trying to trim their OpenRouter bill, or working around storage bucket constraints.
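What "words are tokens" means in practice can be shown in a few lines. Real models use subword schemes such as byte-pair encoding; the fixed word-level vocabulary below is a simplified stand-in, but the takeaway holds: before a model sees your prose, it has already become a list of integers.

```python
# Minimal illustration that "words" reach a model as integer token
# ids, nothing more. The vocabulary here is invented for the example.
vocab = {"<unk>": 0, "ai": 1, "does": 2, "not": 3,
         "think": 4, "it": 5, "computes": 6}

def encode(text):
    # Unknown words map to the <unk> id, a common fallback.
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

ids = encode("AI does not think it computes")
print(ids)  # [1, 2, 3, 4, 5, 6]
```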
If we want to avoid a "severe identity crisis" or an "immigration crisis" of machine intelligence, we need to stop looking to Davos oracles for guidance. We need to listen to the people pushing the limits of the tech, not the people who have spent their careers looking in the rearview mirror of history.
It’s time to stop treating Harari as the voice of the future. He’s just a historian who found a microphone, and his lack of technical qualification is the most dangerous "existential risk" in the room.