d-Matrix recently secured $275 million in new funding to support the commercialization of its artificial intelligence clusters, which are built on the company's in-memory compute technology. The round, which values d-Matrix at $2 billion, was led by Bullhound Capital, Triatomic Capital, and Singapore's sovereign wealth fund Temasek.
Traditional graphics cards keep processing and memory separate, so data must constantly travel between the two components. For AI workloads, that data movement often becomes the main performance bottleneck. d-Matrix addresses the problem with Corsair, an inference accelerator that embeds processing components directly into memory. This digital in-memory compute approach improves both speed and power efficiency.

Corsair ships as a PCIe card for servers and contains two custom chips. The chips repurpose SRAM circuits to perform the vector-matrix multiplications at the heart of AI inference. Each chip also includes a RISC-V control core, while SIMD cores handle parallel calculations efficiently. To save memory, Corsair stores AI models in a compact block floating-point format. When processing data in the MXINT4 block floating-point format, the card performs 9,600 trillion calculations per second. The company's SquadRack architecture delivers up to ten times the performance of HBM-based chips.

Corsair is complemented by JetStream, a network interface card that links multiple Corsair-equipped servers into AI inference clusters, and by Aviator, a software stack that simplifies deploying and monitoring AI models.

Next year, the company plans to introduce a new accelerator called Raptor. It will stack RAM directly onto the compute modules in a three-dimensional configuration, further shortening the distance data must travel and boosting speed. Raptor will also use an advanced four-nanometer manufacturing process.
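To illustrate how a block floating-point format saves space, here is a minimal NumPy sketch of the general idea: each block of values shares a single exponent, and every value keeps only a small signed-integer mantissa (four bits here, in the spirit of MXINT4). The function names, block size, and rounding scheme are illustrative assumptions, not d-Matrix's actual encoding.

```python
import numpy as np

def bfp_quantize(values, mantissa_bits=4, block_size=32):
    """Sketch of block floating-point quantization: each block of
    `block_size` values shares one exponent; each value stores only a
    signed integer mantissa. Illustrative only, not d-Matrix's format."""
    values = np.asarray(values, dtype=np.float64)
    pad = (-len(values)) % block_size
    blocks = np.pad(values, (0, pad)).reshape(-1, block_size)

    # One shared exponent per block, chosen so the largest magnitude
    # in the block fits the mantissa range.
    max_mag = np.max(np.abs(blocks), axis=1, keepdims=True)
    exponents = np.where(max_mag > 0, np.ceil(np.log2(max_mag + 1e-300)), 0.0)
    qmax = 2 ** (mantissa_bits - 1) - 1       # 7 for 4-bit signed mantissas
    scale = 2.0 ** exponents / qmax
    mantissas = np.clip(np.round(blocks / scale), -qmax, qmax).astype(np.int8)
    return mantissas, scale

def bfp_dequantize(mantissas, scale):
    """Reconstruct approximate values from mantissas and per-block scales."""
    return mantissas * scale

# With 4-bit mantissas plus one shared exponent amortized over 32 values,
# storage drops to roughly 4.25 bits per value versus 16 bits for FP16,
# at the cost of bounded rounding error within each block.
m, s = bfp_quantize([1.0, 0.5, 0.25, -0.75], block_size=4)
recon = bfp_dequantize(m, s).ravel()[:4]
```

The worst-case rounding error per value is half the block's scale step, which is why grouping values of similar magnitude into a block keeps the approximation tight.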



