In an interview on CNBC's Squawk Box, Steven Levy, Editor-at-Large at Wired, provided sharp commentary on the state of the global AI race, focusing particularly on Chinese startup DeepSeek’s latest model release and the broader economic implications for major tech players like NVIDIA, Google, and Meta. Levy highlighted DeepSeek’s claim of a significant breakthrough in large language model (LLM) training efficiency, potentially marking a pivotal moment where open-source models challenge the expensive, closed-source dominance of U.S. competitors.
The core of the discussion centered on DeepSeek’s forthcoming Version 4 model, expected around the Lunar New Year, and the white paper released by the company suggesting a revolutionary approach to LLM training. Levy noted that DeepSeek claims to have "figured out a new way to train large language models, which is more efficient, doesn’t require as many chips." This assertion strikes directly at the heart of the current AI arms race, where the massive capital expenditure on high-end NVIDIA GPUs has become the primary barrier to entry and a key driver of market valuations. If DeepSeek's method proves viable, it could fundamentally disrupt the GPU supply chain dependence and significantly reduce the cost of developing highly capable models.
Levy underscored the immediate impact this could have on the developer ecosystem, particularly among coders who frequently shift allegiance based on model performance. He suggested that if DeepSeek’s coding abilities—which are expected to handle "more complex programs" and "complicated tasks"—represent a significant leap, "it’s going to get a big audience." This focus on coding performance highlights the practical, immediate utility sought by developers, who are less concerned with geopolitical lines than with efficiency and capability. The implication is that superior performance, regardless of origin or licensing model, will dictate adoption rates.
The discussion quickly pivoted to the geopolitical tensions surrounding AI hardware, specifically U.S. export controls on top-tier NVIDIA chips to China. Levy acknowledged the suspicion that Chinese firms, including DeepSeek, might be accessing restricted hardware through illicit channels. "Already we’ve been hearing that a lot of what we’re going to see from Version 4 takes advantage of the top-level NVIDIA chips that have been smuggled out," he stated. This shadow market for advanced chips remains a critical, destabilizing factor in the U.S. strategy to curb China’s AI advancements. Furthermore, Levy pointed out the U.S. government’s shifting stance, now allowing the export of slightly less powerful, next-level chips, though some still suspect the very best technology is making its way to Chinese developers.
Considering the escalating costs and the potential for commodification in the LLM space, Levy offered a pragmatic view on the long-term viability of the current spending boom. He believes that the base models are indeed becoming commodified, making the real differentiator how companies "translate the models to actually do useful work." This application-layer focus is what will distinguish companies in the competitive landscape. He cited OpenAI and Anthropic as examples of firms focusing on building applications and extending developer networks to maximize the utility of their models, rather than just chasing marginal improvements in the foundational LLMs themselves.
The conversation broadened to examine the AI strategies of the U.S. tech giants, specifically Google and Meta. While Google possesses unparalleled search dominance and data—a significant advantage—Meta, according to Levy, is struggling to integrate AI seamlessly into its core offerings. He noted that Meta's AI initiatives often feel like a pivot away from "their original value proposition," which "was to connect the world." Mark Zuckerberg's focus on staying ahead of the next technological paradigm, such as his previous emphasis on virtual and mixed reality, means that AI integration is often "farther away from that." Levy concluded that Meta's attempts to "weld it on" to existing, successful products like Instagram and Reels reflect a challenge in aligning its AI development with its core platforms and user base.
Finally, the discussion touched on the AI strategy and leadership stability at Apple and the wild card that is Elon Musk's xAI. Regarding Apple, Levy observed that Tim Cook has exceeded expectations since taking over from Steve Jobs, but questions about his succession remain, particularly in the context of Apple's relatively cautious approach to generative AI. While Apple has always prioritized privacy and seamless integration, its public-facing AI efforts have lagged behind competitors. In contrast, Elon Musk's xAI is positioned as the perpetual "wild card." Levy pointed out Musk's tendency to "shoot himself in the foot," citing the recent controversies surrounding content on X (formerly Twitter). He questioned whether Musk's platform can remain a "wholesome thing" given its content moderation issues, suggesting that this instability detracts from the serious technical merits of xAI's underlying models. The challenge for Musk, Levy implied, is that his contentious platform may ultimately limit the adoption of his AI tools by major app stores like Apple's and Google's, which enforce strict content rules.