The race for AI dominance has officially left Earth’s atmosphere. Starcloud, an Nvidia-backed startup, recently achieved a historic milestone: successfully training and running Google's Gemma, an open-weight large language model (LLM) built on the same research that underpins its Gemini family, in low Earth orbit (LEO). The feat, accomplished on a data-center-class Nvidia H100 GPU aboard the Starcloud One satellite, signals the near-term viability of space-based data centers and, critically, points to a potential answer to the terrestrial energy crunch fueled by soaring AI compute demand.
Philip Johnston, co-founder and CEO of Starcloud, spoke with CNBC’s Pia Singh about the company’s strategy of shifting massive computational workloads away from Earth. While the immediate use case involves providing low-latency inference and cloud compute services to other spacecraft—minimizing the time it takes to downlink massive datasets—the long-term vision is far more ambitious: relocating almost all high-power compute to space.
This shift is driven by an undeniable economic and environmental reality: the energy required to train and run next-generation AI models is becoming unsustainable on Earth. Terrestrial data centers demand immense power, often straining local grids and requiring complex, expensive cooling infrastructure. LEO, by contrast, offers a compelling solar-powered alternative. Johnston quantified the advantage, noting that Starcloud is aiming for an all-in energy cost that is "10x lower... well below 1 cent per kilowatt-hour, instead of... 5 to 10 cents per kilowatt-hour" seen in new energy projects in North America. That tenfold reduction, even after accounting for significant launch costs, is the fundamental economic lever behind this extraterrestrial infrastructure.
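To see what that tenfold spread means at the scale Starcloud is targeting, a back-of-envelope calculation helps. The sketch below uses the article's 5-gigawatt figure and illustrative rates picked from the quoted ranges (7.5 ¢/kWh terrestrial, 0.75 ¢/kWh orbital); these specific rates are assumptions for the arithmetic, not Starcloud's numbers.

```python
def annual_energy_cost(power_gw: float, cents_per_kwh: float) -> float:
    """Annual energy bill in USD for a load drawing `power_gw` continuously."""
    hours_per_year = 24 * 365          # ignoring leap years
    kwh_per_year = power_gw * 1e6 * hours_per_year  # 1 GW = 1e6 kW
    return kwh_per_year * cents_per_kwh / 100       # cents -> dollars

# The article's 5 GW target, at assumed terrestrial vs. orbital rates:
terrestrial = annual_energy_cost(5, 7.5)   # mid-range of "5 to 10 cents"
orbital = annual_energy_cost(5, 0.75)      # "well below 1 cent"

print(f"Terrestrial: ${terrestrial / 1e9:.2f}B per year")
print(f"Orbital:     ${orbital / 1e9:.2f}B per year")
print(f"Ratio:       {terrestrial / orbital:.0f}x")
```

Under these assumptions, a 5 GW facility's energy bill drops from roughly $3.3 billion to about $330 million per year, a gap wide enough to absorb substantial launch and hardening costs.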
The scale of this vision is staggering. Johnston detailed that Starcloud’s ultimate goal is to build a 5-gigawatt data center in orbit. Achieving this on Earth would mean constructing the equivalent of five nuclear power stations side by side, a logistical and regulatory nightmare. In space, however, the availability of energy is, as Johnston put it, "almost unlimited." The company projects that within a 10-to-20-year timeframe, it could be launching the equivalent of the entire current US power grid's output (approximately 400 gigawatts) into orbit annually. For founders and investors focused on scaling AI infrastructure, the prospect of decoupling compute growth from terrestrial energy constraints represents a paradigm shift in capital deployment and operational efficiency.
The successful launch and activation of the Starcloud One satellite, carrying hardware designed for ground-based data centers, was a moment of immense technical risk and excitement. Johnston admitted that the period immediately following separation was "very exciting and nerve-wracking," particularly given that roughly 50% of first spacecraft fail to establish contact with the ground station. The successful operation, however, validated years of intense engineering focused on hardening commercial hardware for space.
A major technical hurdle for operating high-powered commercial chips like the H100 in LEO is the harsh radiation environment. Starcloud addressed this proactively through rigorous testing, subjecting the chips to proton beams at particle-accelerator facilities, including the cyclotron in Knoxville and Brookhaven National Lab, to inform the design of effective shielding. Johnston asserted that they are now "the only people in the world now that know where an H100 will fail if you blast it with protons at that speed." This level of engineering diligence underscores the seriousness with which Starcloud is tackling the physical constraints of space operations.
Beyond radiation, orbital debris, and the runaway collision cascade known as Kessler Syndrome, remains a critical concern for any company planning massive LEO constellations. Starcloud mitigates this risk by operating its first satellites at a relatively low altitude, where atmospheric drag ensures that if a thruster were to fail, the satellite would de-orbit and burn up naturally within a short timeframe. The spacecraft also carries redundant thrusters for reliable end-of-life debris mitigation. The successful training run of the LLM in orbit, which generated the memorable first words "Greetings Earthlings," validates the operational stability of this pioneering infrastructure.
In reflecting on the journey of leading a deep-tech space startup, Johnston offered a key organizational insight relevant to any high-growth, high-risk venture. His most valuable lesson as CEO has been the imperative "to keep our team as lean and engineering-dense as possible." Starcloud achieved this monumental technical feat with a team of only 12 engineers, proving that focused, high-leverage engineering talent is more critical than sheer scale in navigating the complex intersection of space and advanced computing.
