AI infrastructure, the foundational layer enabling the generative artificial intelligence boom, is predicted to continue leading market returns through 2026, but the critical investment bottlenecks are rapidly shifting beyond the immediate compute stack, according to Clare Pleydell-Bouverie, Co-Head of the Global Innovation Team at Liontrust Asset Management. The firm is operating on a clear playbook: identifying and investing behind the constraints that emerge as AI systems scale exponentially.
Pleydell-Bouverie, speaking on CNBC’s Worldwide Exchange, outlined a three-stage evolution of infrastructure investment, moving from silicon components to high-speed connectivity and, critically, to the raw energy supply required to keep these massive systems operational. She emphasized that the strategy is driven by identifying where the next constraints emerge, noting, "This is a playbook that’s worked very well over the last three years: investing in the bottlenecks, particularly when it comes to AI infrastructure because we’re of the firm belief that AI infrastructure will continue to lead the market in 2026." The initial phase saw massive returns in the components necessary to build the AI clusters—memory, compute, and storage—a phase exemplified by the performance of companies like TSMC, NVIDIA, and AMD. However, as the industry pushes toward increasingly large and interconnected models, the limiting factors are migrating up the stack.
The immediate next constraint, which Pleydell-Bouverie expects to emerge in earnest over the next two years, lies in networking. The challenge is no longer just connecting GPUs (Graphics Processing Units) within a single rack, the relatively well-understood "scale-up" domain, but linking vast clusters of them so they operate as a single, coherent computational unit, the far harder "scale-out" problem. As AI clusters grow from roughly 100,000 chips today toward a million chips tomorrow, the internal communication requirements grow non-linearly. The data transfer speeds and efficiency needed to prevent latency from hobbling training performance necessitate a complete overhaul of the optical and fiber systems connecting the hardware.
The shift to scale-out networking, connecting these immense clusters over long distances and often across state lines, represents a massive opportunity. Pleydell-Bouverie cited Ciena as a key beneficiary of this transition, describing them as "the railroads for this AI traffic." Their role is to provide the high-powered optical gear and dense interconnect solutions hyperscalers need to link disparate data centers, turning multiple facilities into one massive, contiguous computational engine capable of handling the intense bandwidth demands of next-generation large language models. The technical complexity and capital intensity of solving this connectivity problem secure the moat for companies that can deliver reliable, high-speed optical fiber solutions at scale.
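The "non-linear" claim can be made concrete with a toy calculation. In the worst case, all-to-all traffic among n accelerators involves n(n-1)/2 distinct chip pairs, so a tenfold jump in chip count implies roughly a hundredfold jump in potential communication pairs. This is a minimal sketch with an illustrative function name; real fabrics use hierarchical topologies rather than full meshes, so actual link counts are lower, but the pressure on the interconnect grows the same way:

```python
def pairwise_links(n: int) -> int:
    """Distinct chip pairs in an n-chip cluster (worst-case all-to-all traffic)."""
    return n * (n - 1) // 2

# Today's frontier clusters vs. the million-chip clusters the article anticipates.
links_today = pairwise_links(100_000)    # ~5.0e9 pairs
links_next = pairwise_links(1_000_000)   # ~5.0e11 pairs

# A 10x increase in chips yields roughly a 100x increase in potential pairs.
print(f"{links_next / links_today:.1f}x more communication pairs")  # → 100.0x
```

This quadratic pressure is why cluster growth forces a redesign of the network fabric rather than a simple extension of it.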
Beyond the digital bottlenecks of networking and silicon, the most fundamental constraint facing the continued expansion of AI is physical energy. Data center power demand is skyrocketing, driven by the sheer density and continuous operation of GPU clusters. Pleydell-Bouverie stated plainly that "the US grid is essentially sold out," forcing hyperscalers and data center operators to fundamentally rethink their energy sourcing strategies. This realization shifts investment focus toward "behind-the-meter" power solutions, meaning power generation located directly on-site or dedicated to the data center, bypassing stressed utility grids.
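The grid pressure described above can be sized with rough back-of-envelope arithmetic. The wattage and efficiency figures below are illustrative assumptions for order-of-magnitude purposes, not numbers from the article:

```python
# Rough sizing of continuous power demand for a large AI cluster.
# All figures are illustrative assumptions, not sourced from the article.
chips = 1_000_000      # a million-chip cluster, per the trajectory described
watts_per_chip = 700   # assumed draw of one high-end accelerator
pue = 1.3              # assumed Power Usage Effectiveness (cooling, networking, overhead)

total_gw = chips * watts_per_chip * pue / 1e9
print(f"~{total_gw:.2f} GW of continuous power")  # → ~0.91 GW
```

Under these assumptions, a single million-chip campus draws on the order of a gigawatt around the clock, roughly the output of a large nuclear reactor, which is why dedicated behind-the-meter generation becomes attractive.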
While gas turbines offer rapid deployment, a trend evident in the extensive order backlogs at companies like GE Vernova, the long-term solution for continuous, efficient power for AI data centers is nuclear energy. Reliability is paramount; as Pleydell-Bouverie noted, "You cannot have these clusters go down." This is driving major technology firms to sign large-scale power purchase agreements (PPAs) with nuclear operators. The always-on, carbon-free profile of nuclear power makes it uniquely suited to the massive energy requirements of large AI operations. Companies that can deliver reliable power at scale, such as Constellation Energy, are positioned to capture significant value as essential partners in the AI infrastructure buildout, underpinning the entire ecosystem's expansion through 2026 and beyond.
