“What we are seeing here actually, what we’ve seen for the last 10 years, maybe even 15... is this very smooth exponential process. Just like in the 90s, you saw Moore’s Law... we have a Moore’s Law-like law, except it’s for intelligence itself.” This assessment, delivered by Dario Amodei, Co-Founder and CEO of Anthropic, set the urgent, high-stakes tone for his discussion with Bloomberg Editor-in-Chief John Micklethwait at Bloomberg House in Davos on the sidelines of the 2026 World Economic Forum. The conversation centered on the accelerating pace of cognitive AI development, Anthropic's enterprise strategy with its flagship model, Claude, and the intense geopolitical and economic pressures resulting from this technological leap.
Amodei argues strongly against the notion of a sudden, discontinuous leap to Artificial General Intelligence (AGI), preferring to frame progress as a continuous, albeit accelerating, climb up the ladder of cognitive ability. He notes that the pace of improvement is so rapid that cognitive ability is "doubling every, you know, four to 12 months." This exponential trend is already manifesting in profound productivity gains, particularly in software development. Amodei reveals that the team leading Anthropic's coding products, Claude Code and Co-Work, has seen engineers nearly eliminate manual coding. He cites a lead engineer who "hasn't written any code in the last two months. It's all Claude." This anecdote serves as a clear, immediate signal of the technology's capacity to automate complex, high-value tasks, suggesting that the industry is "a year or two away from it really zooming past us."
This focus on tangible productivity and enterprise application also highlights Anthropic's strategic differentiation in a crowded field. While competitors have pursued consumer-oriented paths, optimizing models to be "superhumanly engaging" for shopping or advertisements, Anthropic has prioritized enterprises and developers. Amodei notes that this approach is inherently more stable and focused on direct value creation. Unlike the consumer market, which relies on unpredictable engagement and externalities like advertising revenue, the enterprise sector provides a direct path to high returns by solving quantifiable, costly business problems. This strategy insulates Anthropic from the "fickle" nature of consumer trends and gives them better margins, allowing them to reinvest heavily in foundational model research and safety.
The high stakes of this accelerating intelligence are particularly acute in the geopolitical arena. Amodei strongly supports US chip export controls, particularly those targeting China's access to advanced NVIDIA GPUs. He views the ability to manufacture and control the supply of cutting-edge AI hardware as the single most critical factor maintaining American leadership. Citing statements from Chinese tech executives, Amodei notes that the embargo is, by their own admission, "holding us back." The CEO draws a stark, if inflammatory, parallel between controlling AI hardware and regulating weapons technology, stating: "If you think about the incredible national security implications of building models that are essentially cognition, that are essentially intelligence... it's a bit like, you know, selling nuclear weapons to North Korea." He views any easing of these restrictions as "crazy" and "not well advised," emphasizing that the US holds a multi-year lead in chip manufacturing capability that must be preserved.
Beyond the geopolitical competition, Amodei emphasizes the severe economic disruption that accelerated AI poses domestically. He stands by his prior predictions of a potential "white-collar bloodbath," warning that the rapid automation of entry-level and even senior tasks could lead to an unprecedented macroeconomic combination: "very fast GDP growth and high unemployment or at least underemployment... high inequality." This challenge, he argues, demands thoughtful, proactive policy intervention rather than reactive legislation; he dismisses quick fixes like the wealth tax currently under consideration in California as "poorly designed."
Instead, Anthropic is attempting to provide the data necessary for sober governance. It has developed the Anthropic Economic Index, which uses Claude itself to track "how our models are used in real time" across various industries and subtasks. The goal is to provide governments with granular, actionable data that moves faster than traditional labor statistics, allowing them to design interventions, such as retraining and adaptation programs, effectively. Amodei stresses that the current challenge is not merely technological adoption but organizational change management, noting that the technology is already "probably 10 times what the enterprises of the world are able to deploy." The gap between capability and deployment speed, coupled with the immense capital requirements needed to build out the necessary compute infrastructure, creates an inherent financial risk in the system. While he remains confident in the technology's ability to generate trillions in revenue, he acknowledges that some companies may have "overbought" on compute, exposing them to significant financial strain if organizational adaptation lags.

Amodei concludes by stressing the necessity of transparency and safety research, noting Anthropic's pioneering work in "mechanistic interpretability," which allows researchers to look inside the AI's "artificial brain" to diagnose and prevent models from developing dangerous or deceptive behaviors before they are deployed in the real world.



