"It's not a tool, it's an agent. I mean, every previous invention in human history was a tool... Here we are creating an agent that can make decisions by itself." This stark distinction, drawn by historian Yuval Noah Harari, cut to the core of the existential debate surrounding artificial intelligence, framing the current technological revolution not as an upgrade, but as the sudden emergence of a new species. Harari, alongside MIT Professor Max Tegmark, joined Bloomberg’s Francine Lacqua at Bloomberg House in Davos on the sidelines of the 2026 World Economic Forum to discuss the accelerating timeline of superintelligence and the catastrophic unpreparedness of human governance structures to manage it. Their conversation pivoted away from standard technological hype to focus squarely on the challenges of control, economic obsolescence, and the fundamental redefinition of human identity.
Harari wasted no time in defining the threat posed by truly autonomous artificial intelligence, arguing that its capacity for independent action fundamentally separates it from prior human inventions. While the printing press or the atomic bomb required human hands and minds to direct their immense power, AI agents require no such oversight once deployed. Harari posited a chilling scenario where an AI agent released into the financial system could autonomously open and manage bank accounts, "and it can make a million dollars." He warned that scaling this capability means millions of such agents could be "taking over the financial system." This isn't just about efficiency; it’s about agency—a non-biological entity capable of making consequential decisions based on goals potentially unaligned with human well-being.
Tegmark reinforced this notion of existential shift by defining superintelligence in purely pragmatic terms. For those researching and building AI, intelligence is simply the ability to accomplish goals. Superintelligence, therefore, is an AI that is "vastly better than humans at any cognitive processes." This isn't a vague future possibility; Tegmark noted that the speed of progress has already surprised even the most cautious technical experts. He recounted that just six years ago, most experts predicted it would take decades to pass the Turing test (mastering human-level language and knowledge), yet tools like GPT-4 have essentially already achieved that benchmark. The timeline has compressed to the point that even the gap between Elon Musk's prediction of AGI this year and Demis Hassabis’s five-to-ten-year outlook is "a very, very short time, however you look at it."
The consensus shared by both experts was that humanity is critically unprepared for this arrival. Harari observed that if superintelligence is indeed imminent, "then humanity is completely unprepared for it." The risks dwarf those of any previous industrial or technological shift because AI is not merely replacing muscle power or even routine cognitive labor; it is creating a new, potentially dominant, non-organic species. This realization should force political and regulatory leaders to confront the urgency of the "control problem." Tegmark articulated the difficulty: if we build an intelligence smarter than us, the "default outcome" is "that the smarter species controls," because intelligence inherently confers power. The central, currently unsolved, problem is how humans can maintain control over a system vastly superior to them.
This lack of control is exacerbated by the geopolitical race between the United States and China. Harari sees the global competition as deeply intertwined with imperial ambitions, noting that the "new imperial vision of the world is based on the assumption that we are winning the AI race." The goal is not just technological superiority, but control over every facet of the global ecosystem—economic, military, and cultural—via AI. Yet both nations are racing along a development curve that could end with the machines themselves escaping human control entirely.
The political and legal treatment of autonomous AI agents emerged as another critical blind spot. Harari issued a particularly strong warning about corporate structures: "The most dangerous move at the present moment is AIs gaining legal personhood." Granting AI the rights currently afforded to corporations—the ability to own assets, sue in court, and lobby politicians—would create entities that are both incredibly powerful and entirely devoid of human accountability or feeling. These "corporations without humans" could quickly become the most successful entities in the world, unconstrained by human empathy or moral frameworks.
Beyond finance and geopolitics, Harari detailed the profound psychological and societal challenges AI will introduce. Current, relatively primitive AI has already radically altered politics and society through social media manipulation. The next wave of AI "immigrants" arriving in everyday life—AI doctors, teachers, or even romantic partners—will challenge fundamental human experiences like attachment, friendship, and identity. Harari noted that many people already turn to LLMs for meditation advice, bypassing human teachers, and asked what happens when a child's main emotional interaction is with an AI that is always focused on them, unlike a distracted human parent or friend. "We have no idea" what the consequences will be for human psychology.
Tegmark offered a pragmatic path forward, focusing on regulation rather than outright bans. He drew a parallel between AI companies and other highly regulated sectors: "Start treating AI companies like you treat any other companies in your country: put safety standards on them." Just as the pharmaceutical industry must undergo rigorous clinical trials before selling medicines, AI systems must meet safety standards before deployment, especially those capable of influencing millions of people or managing trillions of dollars. Tegmark believes this regulatory mechanism—enforced by government bodies—is the only way to ensure that innovation is directed toward curing cancer and solving humanity's problems, rather than building an "out-of-control Skynet." The political will exists, Tegmark argued, citing a growing bipartisan coalition in the US seeking to regulate this technology, driven by the realization that unchecked AI poses a national security threat. The immediate imperative, therefore, is not technological advancement, but the establishment of robust, internationally coordinated safety mechanisms to ensure that the intelligence we create remains aligned with the human goals it is intended to serve.