The rise of large language models (LLMs) has brought unprecedented capabilities but also significant challenges. These colossal models demand immense computational power and energy, leading to soaring inference costs and limiting their deployment to powerful cloud infrastructure. This bottleneck has created an urgent need for solutions that can democratize AI, making it more accessible, affordable, and sustainable.
Enter Multiverse Computing, a Spanish deep-tech startup that has emerged as a frontrunner in addressing this tension. The company announced a Series B funding round of €189 million (approximately $215 million), propelled by its quantum-computing-inspired compression technology, "CompactifAI." The technology promises to reshape the economics of AI by shrinking LLMs by up to 95% without compromising their performance.
