Michael Kagan, CTO of Nvidia and co-founder of Mellanox, recently engaged in a candid discussion with Sonya Huang and Pat Grady at Sequoia’s Europe100 event, offering profound insights into Nvidia's meteoric rise as the architect of AI infrastructure. His commentary illuminated the pivotal role of the Mellanox acquisition in transforming Nvidia from a mere chip company into a full-stack AI platform powerhouse, capable of scaling computing capabilities far beyond the traditional confines of Moore's Law. This evolution, Kagan posited, is fundamentally driven by an exponential surge in AI workloads and an unprecedented reliance on advanced networking.
Kagan emphasized that the global demand for computing is growing exponentially, at a rate far exceeding the historical doubling every two years predicted by Moore's Law. "AI kicked in when GPU from graphic processing unit became general processing unit," he stated, marking a critical inflection point around 2010-2011. Since then, the performance requirements for AI models have accelerated dramatically, now demanding a 10x to 16x increase in performance annually, or roughly doubling every three months. This relentless pace necessitates innovation that extends beyond merely squeezing more transistors onto a single chip.
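The gap Kagan describes can be checked with simple compound-growth arithmetic. The sketch below (illustrative only; the doubling periods are taken from the figures quoted above) compares Moore's-Law-style growth, roughly 2x every 24 months, against a pace of 2x every 3 months, which compounds to exactly 16x per year:

```python
def growth(months: float, doubling_period_months: float) -> float:
    """Performance multiple after `months`, given a fixed doubling period."""
    return 2 ** (months / doubling_period_months)

# Over one year:
moore_one_year = growth(12, 24)  # ~1.41x (Moore's Law pace)
ai_one_year = growth(12, 3)      # 16x — the upper end of the "10x to 16x annually" figure

# Over five years the gap compounds enormously:
moore_five_years = growth(60, 24)  # ~5.7x
ai_five_years = growth(60, 3)      # 2^20, over a million-fold
```

The five-year comparison makes the point concrete: transistor scaling alone cannot close a gap of this size, which is why Kagan frames the answer as scaling out across many chips connected by fast networks rather than scaling up a single die.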
