MiniMax is pushing the boundaries of AI development with its latest model, the MiniMax M2.7. This new iteration is not just an upgrade; it represents an early step towards AI models actively participating in their own evolution, a concept the company calls "early echoes of self-evolution." This approach leverages user feedback to accelerate development cycles.
The M2.7 model is designed to build complex agent harnesses and execute intricate productivity tasks. It uses capabilities such as Agent Teams, sophisticated Skills, and dynamic tool search. In a notable experiment, MiniMax allowed M2.7 to update its own memory and construct numerous complex skills for reinforcement learning runs, then refine its learning process based on those results, initiating a cycle of self-improvement.
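Dynamic tool search generally means the harness does not load every tool definition into the model's context; instead it indexes the tools and retrieves only those relevant to the current step. The sketch below illustrates that pattern in a minimal form; all names, the registry contents, and the keyword-overlap ranking are illustrative assumptions, not MiniMax's actual implementation.

```python
# Hypothetical sketch of dynamic tool search: rank registered tools by keyword
# overlap with the current request and expose only the top matches to the model.
from dataclasses import dataclass

STOPWORDS = {"the", "a", "in", "for", "from", "of", "to"}

@dataclass
class Tool:
    name: str
    description: str

REGISTRY = [
    Tool("read_file", "Read a file from the workspace"),
    Tool("run_tests", "Execute the project's test suite"),
    Tool("search_logs", "Search application logs for a pattern"),
]

def search_tools(query: str, registry=REGISTRY, top_k=2):
    """Return up to top_k tools ranked by naive keyword overlap with the query."""
    terms = set(query.lower().split()) - STOPWORDS
    scored = [
        (len(terms & (set(f"{t.name} {t.description}".lower().split()) - STOPWORDS)), t)
        for t in registry
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Only tools with at least one matching term are injected into context.
    return [t for score, t in scored[:top_k] if score > 0]

print([t.name for t in search_tools("find the error pattern in the logs")])
```

A production harness would use embedding similarity rather than keyword overlap, but the context-saving idea is the same: the model sees a handful of relevant tool schemas instead of the full registry.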
Performance and Capabilities
In real-world software engineering, M2.7 delivers strong results. It handles end-to-end project delivery, log analysis, bug troubleshooting, and code security. On the SWE-Pro benchmark, it scored 56.22%, nearing top-tier performance. Its capabilities extend to full project delivery (VIBE-Pro: 55.6%) and to understanding complex engineering systems (Terminal Bench 2: 57.0%).
The model also excels in professional office software domains. Its Elo score on GDPval-AA is 1495, the highest among open-source models, showcasing enhanced expertise and task completion. M2.7 demonstrates significant improvements in complex editing tasks across Excel, PPT, and Word, handling multi-round revisions with high fidelity. It maintains a 97% skill adherence rate while working with over 40 complex skills, each exceeding 2,000 tokens.
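Operating 40+ skills of 2,000+ tokens each implies a practical constraint: the harness cannot inject every skill into context on every request and must budget tokens per call. The sketch below shows one way such budgeting could work; the skill names, the greedy packing strategy, and the 4-characters-per-token estimate are illustrative assumptions, not details from MiniMax.

```python
# Hypothetical sketch: select which skill documents fit a per-request token
# budget. A real harness would use the model's tokenizer and relevance ranking.

def estimate_tokens(text: str) -> int:
    """Rough heuristic (~4 characters per token), not a real tokenizer."""
    return max(1, len(text) // 4)

def pack_skills(skills: dict[str, str], budget: int) -> list[str]:
    """Greedily select skill names, smallest first, until the budget is spent."""
    chosen, used = [], 0
    for name, body in sorted(skills.items(), key=lambda kv: estimate_tokens(kv[1])):
        cost = estimate_tokens(body)
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen

# Placeholder skill bodies sized like the 2,000+-token skills described above.
skills = {
    "excel_multi_round_edit": "x" * 9000,   # ~2250 estimated tokens
    "ppt_layout_revision": "x" * 8200,      # ~2050 estimated tokens
    "word_track_changes": "x" * 8800,       # ~2200 estimated tokens
}
print(pack_skills(skills, budget=4500))
```

Greedy smallest-first packing maximizes the number of skills included; a harness that instead prioritizes task relevance would rank skills first and pack in that order.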
Furthermore, M2.7 exhibits strong character consistency and emotional intelligence, opening new avenues for product innovation. These advancements are accelerating MiniMax's own transformation into an AI-native organization.
