NVIDIA and Mistral AI have partnered to launch the Mistral 3 family of open-source models, optimized for deployment across NVIDIA platforms from supercomputers to edge devices. The collaboration introduces a new generation of multilingual, multimodal models built for enterprise applications, and it signals a strategic push to democratize high-performance AI by making frontier-class models practical for real-world use cases.
At the core of this release is Mistral Large 3, a mixture-of-experts (MoE) model with 675 billion total parameters, of which only 41 billion are active for any given token. By routing each token to a small subset of expert subnetworks rather than the full network, the architecture preserves accuracy while avoiding wasteful computation. According to the announcement, this architecture, running on NVIDIA GB200 NVL72 systems, achieves a 10x performance gain over the prior-generation H200, translating directly into lower costs and improved user experience for enterprise AI workloads.
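The announcement does not describe Mistral Large 3's internals beyond the parameter counts, but the active-vs-total split is the hallmark of top-k expert routing. The toy layer below is a minimal, illustrative sketch of that general idea (the class, dimensions, and expert count here are invented for the example, not taken from the model): a router scores every expert, yet only the top-k are actually evaluated, so per-token compute scales with k rather than with the total expert count.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

class TinyMoE:
    """Toy mixture-of-experts layer (illustrative only, not Mistral's code).

    All experts exist in memory (the "total parameters"), but each input
    runs through only k of them (the "active parameters")."""

    def __init__(self, dim, n_experts, k, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.standard_normal((dim, n_experts))
        # Each expert is a simple linear map dim -> dim.
        self.experts = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
        self.k = k

    def forward(self, x):
        scores = x @ self.router                 # one score per expert
        topk = np.argsort(scores)[-self.k:]      # indices of the k best experts
        weights = softmax(scores[topk])          # renormalize over the chosen k
        # Only the selected experts are evaluated; the rest are skipped.
        out = sum(w * (x @ self.experts[i]) for w, i in zip(weights, topk))
        return out, topk
```

For example, a layer with 16 experts and k=2 touches only 2/16 of its expert weights per token, which is the same mechanism that lets a 675B-parameter model run with 41B-parameter cost.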
