Scaling Agent Collaboration via Recursion

RecursiveMAS scales agent collaboration via a unified latent-space recursive computation, achieving significant accuracy gains with improved efficiency.

Diagram: the RecursiveMAS framework connects heterogeneous agents in a recursive loop, enabling scalable agent collaboration through recursive computation.

The quest for deeper reasoning and more complex problem-solving in AI has led to recursive, or looped, language models. Now this principle is being extended to multi-agent systems, raising the question of whether agent collaboration itself can be scaled through recursion. This advancement promises to unlock new paradigms in distributed AI.

RecursiveMAS: Unifying Agents in a Latent-Space Loop

Introducing RecursiveMAS, a novel framework that recasts multi-agent systems as a unified, latent-space recursive computation. Unlike traditional approaches, RecursiveMAS connects heterogeneous agents via a lightweight RecursiveLink module, facilitating in-distribution latent thought generation and cross-agent latent state transfer. This recursive approach allows the entire system to iteratively refine its collective output, mirroring the deepening reasoning seen in single-model recursive architectures.
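The article does not specify the internals of the RecursiveLink module. As a rough intuition only, the sketch below models each agent as a fixed latent transform and a hypothetical lightweight link that mixes each agent's latent state with the others' before the next recursion round; all dimensions, weights, and the mean-pooling scheme are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8          # latent dimension (assumed for illustration)
N_AGENTS = 3   # heterogeneous agents, stood in for by random transforms
ROUNDS = 4     # recursion depth

# Each "agent" refines its latent via a fixed random map (placeholder for a model).
agent_weights = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_AGENTS)]

# Hypothetical RecursiveLink: a lightweight linear map merging an agent's own
# latent with the mean of all agents' latents (cross-agent latent state transfer).
link_weight = rng.standard_normal((2 * D, D)) / np.sqrt(2 * D)

def recursive_mas(x0, rounds=ROUNDS):
    latents = [x0.copy() for _ in range(N_AGENTS)]
    for _ in range(rounds):
        # Each agent updates its own latent thought.
        latents = [np.tanh(w @ z) for w, z in zip(agent_weights, latents)]
        # RecursiveLink passes a shared latent summary back to every agent.
        mean_latent = np.mean(latents, axis=0)
        latents = [np.tanh(np.concatenate([z, mean_latent]) @ link_weight)
                   for z in latents]
    return np.mean(latents, axis=0)  # unified system output in latent space

out = recursive_mas(rng.standard_normal(D))
print(out.shape)
```

The key point the toy captures is that refinement happens in latent space across rounds, rather than through text messages exchanged between agents.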

Co-Optimization for Systemic Intelligence

To effectively train such a recursively connected system, the researchers developed an inner-outer loop learning algorithm. This approach enables iterative, whole-system co-optimization through shared gradient-based credit assignment across recursion rounds. Theoretical analyses confirm that RecursiveMAS is more efficient than standard text-based Multi-Agent Systems (MAS) and maintains stable gradients during its recursive training process, addressing key challenges in training complex, interconnected AI.
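The paper's actual algorithm is not reproduced in the article; as a minimal scalar toy of the idea, the sketch below unrolls an inner loop of recursion rounds through one shared "link" parameter, then has an outer loop update that parameter with gradient credit summed across every round. All quantities (the scalar recursion `z <- a * z`, the target, the learning rate) are illustrative assumptions.

```python
# Toy inner-outer loop (illustrative, not the paper's algorithm):
# inner loop = recursive forward pass; outer loop = whole-system update of the
# shared parameter via credit assignment accumulated across recursion rounds.
ROUNDS = 5
z0, target = 1.0, 2.0
a, lr = 0.9, 0.01          # shared link parameter and outer-loop learning rate

for step in range(200):    # outer loop: whole-system co-optimization
    # Inner loop: recursive forward pass, caching per-round states.
    zs = [z0]
    for _ in range(ROUNDS):
        zs.append(a * zs[-1])
    loss = (zs[-1] - target) ** 2

    # Backward pass: credit for the shared parameter is summed over rounds,
    # so every recursion round contributes to the same gradient.
    grad_z = 2.0 * (zs[-1] - target)
    grad_a = 0.0
    for t in reversed(range(ROUNDS)):
        grad_a += grad_z * zs[t]   # d(a * z_t)/da = z_t
        grad_z *= a                # d(a * z_t)/dz_t = a
    a -= lr * grad_a               # outer-loop parameter update

print(a, loss)
```

Because the parameter is shared across rounds, the gradient is a sum of per-round terms; the stability result mentioned above concerns keeping exactly this kind of through-the-recursion gradient well behaved.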

Empirical Validation Across Diverse Domains

The efficacy of RecursiveMAS was demonstrated across 9 benchmarks spanning mathematics, science, medicine, search, and code generation, instantiated under 4 representative agent collaboration patterns. Compared to advanced single-agent, multi-agent, and recursive computation baselines, RecursiveMAS consistently delivered an average accuracy improvement of 8.3%. Furthermore, it achieved significant efficiency gains, with 1.2×–2.4× end-to-end inference speedups and a remarkable 34.6%–75.6% reduction in token usage. This highlights the practical viability and performance advantages of the RecursiveMAS framework.

© 2026 StartupHub.ai. All rights reserved.