Meta's Chief Global Affairs Officer, Joel Kaplan, spoke with CNBC’s Andrew Ross Sorkin at Davos 2026, detailing the company's massive commitment to Artificial Intelligence infrastructure and job creation in the U.S. The conversation centered on Meta’s ambitious goal to build "personal superintelligence" and their differentiated approach to the burgeoning AI landscape, particularly concerning open-source models like Llama and the geopolitical race with China.
Kaplan immediately framed Meta’s AI endeavor not just as a technological pursuit but as a significant capital expenditure with national implications. He highlighted the scale of the financial commitment, noting the company is investing an "enormous amount in AI infrastructure and jobs in the United States." This investment underpins their primary objective: the creation of what Kaplan terms "personal superintelligence," which he defines as "putting superintelligence in the hands of everyone." This vision suggests a democratizing force for AI capabilities, contrasting with potentially more centralized approaches.
A core insight from Kaplan’s commentary is Meta’s strategy of leveraging its massive user base as a foundational advantage in the AI race. Kaplan asserted, "We've got a platform that serves 3.5 billion people every day, so we have some real advantages there." That scale gives Meta a unique testing and deployment ground for its models, allowing it to iterate and refine AI products rapidly and to bring frontier models to bear more effectively than competitors who lack this direct, massive-scale feedback loop.
The discussion pivoted to Meta’s differentiated strategy regarding model development and deployment. While acknowledging the high visibility of competitors like OpenAI and Google (Gemini), Kaplan emphasized Meta's commitment to an open approach, specifically citing the Llama model. He noted that while they don't discuss financing details publicly, their core strategy is focused on creating an ecosystem where AI is accessible. He pointed out that their approach is different from those who rely heavily on private credit to fund infrastructure buildouts, suggesting Meta's internal resources and existing platform scale provide a more robust foundation.
Furthermore, Kaplan addressed the geopolitical dimension of AI development, framing the technological race as a critical strategic contest. He directly linked Meta’s AI investment to national security concerns, stating, "We're aligned with the administration in the US that it's very important that we win this battle against China." This underscores the high stakes involved, positioning AI leadership as integral to maintaining Western economic security and technological supremacy. He emphasized the need to "clear away the regulatory burdens" affecting data centers and energy to ensure the U.S. can maintain its competitive edge against China.
The conversation also touched upon the regulatory environment, particularly in Europe. When questioned about whether Meta's actions might irritate European leaders accustomed to more stringent regulatory oversight, Kaplan acknowledged the political headwinds but maintained an optimistic stance on collaboration. He conceded that things have become "pretty tense" regarding regulation, but pointed out that the U.S. administration has been supportive of removing regulatory burdens on AI development and has been a "huge ally in pushing back on some of the discriminatory regulation that we've seen come out of Europe," suggesting a shared interest in fostering innovation despite differing approaches.
Ultimately, Meta’s strategy, as articulated by Kaplan, is multifaceted: leveraging massive existing scale, pursuing an open-source model philosophy with Llama, and aggressively investing in infrastructure to achieve "personal superintelligence," all while navigating complex global regulatory and geopolitical currents. The underlying message for founders and VCs is that the AI race is capital-intensive, and scale, both in compute and user reach, is being treated as a decisive competitive factor.
