
DR Tulu deep research: open-source AI closes the proprietary gap

StartupHub Team
Nov 18, 2025 at 8:18 PM · 4 min read

The landscape of AI-driven deep research just received a significant jolt. Ai2 has unveiled DR Tulu, an open, end-to-end training recipe and model designed to tackle long-form deep research tasks. This release directly challenges the dominance of proprietary systems, offering a robust, cost-effective alternative that demonstrates impressive performance across rigorous industry benchmarks. It marks a pivotal moment for the open-source AI community, providing a comprehensive framework for building agentic systems capable of planning, searching, and synthesizing complex information.

Deep research, by its nature, demands sophisticated agentic capabilities: planning, multi-source information retrieval, and nuanced synthesis to answer intricate questions. While proprietary solutions have shown increasing success in this domain, open alternatives have struggled, largely due to the inherent difficulties in training and evaluating agents for open-ended, long-form tasks. According to the announcement, traditional Reinforcement Learning with Verifiable Rewards (RLVR) falls short when there isn't a single "correct" answer, and static evaluation rubrics fail to capture the dynamic, evolving nature of deep research workflows. This gap has often forced researchers to rely on fixed, hand-crafted pipelines built on closed models, hindering transparency and community-driven innovation.

DR Tulu addresses these core challenges through a novel combination of supervised fine-tuning (SFT) and Reinforcement Learning with Evolving Rubrics (RLER). The SFT stage bootstraps the model with foundational research skills, using high-quality, naturally occurring information-seeking queries and teacher-generated trajectories from advanced models like GPT-5. This initial phase ensures the agent develops a reasonable strategy for planning, tool invocation, and citation before engaging in more complex exploration. The subsequent online RL with RLER then refines these capabilities, allowing the model to adapt and improve its tool use, synthesis quality, and citation behavior in a dynamic, web-enabled environment.
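
To make the two-stage shape of that recipe concrete, here is a minimal, runnable Python sketch. Every class, function, and string in it is an illustrative stand-in rather than anything from Ai2's released code: the real recipe trains an actual language model policy, and the "rubric" below is reduced to a single toy check.

```python
# Illustrative sketch of an SFT-then-RL recipe like the one described
# above. All names are hypothetical stand-ins, not Ai2's API.

class Agent:
    def __init__(self):
        self.behaviors = []  # planning / tool-use / citation habits

    def sft(self, teacher_trajectories):
        # Stage 1: imitate teacher-generated trajectories (the release
        # cites advanced models such as GPT-5 as teachers) so the agent
        # starts with sane planning, tool calls, and citations.
        self.behaviors = list(teacher_trajectories)

    def research(self, query):
        # Stand-in for the real plan -> search -> synthesize -> cite loop.
        return f"{query}: synthesized answer via {self.behaviors} [1]"


def rl_step(agent, query, rubric):
    # Stage 2: roll out the agent, score the report against an evolving,
    # query-specific rubric, and (in real training) update the policy on
    # that reward. Here we simply return the reward.
    report = agent.research(query)
    return rubric(report)


agent = Agent()
agent.sft(["plan", "search", "cite"])
toy_rubric = lambda report: 1.0 if "[1]" in report else -1.0  # citations present?
print(rl_step(agent, "open deep research models", toy_rubric))  # -> 1.0
```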

RLER: A Paradigm Shift for Open-Ended Research Training

The true innovation behind DR Tulu deep research lies in its RLER framework. Recognizing the limitations of static rubrics and biased LM-as-judge feedback, RLER introduces an adaptive reward function that evolves along three critical axes. It generates instance-specific, search-grounded criteria for each query, ensuring relevance and up-to-date knowledge. Crucially, RLER employs both positive and negative evolving rubrics, rewarding novel, high-value strategies while explicitly penalizing exploitative behaviors like verbatim copying or irrelevant content padding. This dynamic feedback loop, combined with a filtered rubric buffer and auxiliary citation rewards, allows DR Tulu to continuously learn and refine genuine research quality, moving beyond superficial patterns that often plague other LM-as-judge approaches.
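
As a rough illustration of that reward structure, the toy function below scores a report against positive and negative rubric lists, applies an explicit penalty for verbatim copying from sources, and adds a small auxiliary citation bonus. The string-matching heuristics are deliberately simplistic stand-ins; in RLER itself, rubrics are generated and judged by language models with search grounding.

```python
# Toy RLER-style reward: instance-specific positive/negative rubrics
# plus anti-exploit penalties. Purely illustrative scoring logic.

def _chunks(text: str, size: int):
    # Fixed-width slices used by the verbatim-copy check below.
    return (text[i:i + size] for i in range(0, len(text), size))


def rler_reward(report: str, sources: list[str],
                positive: list[str], negative: list[str]) -> float:
    text = report.lower()
    # Positive rubrics reward coverage of query-specific criteria...
    score = sum(1.0 for crit in positive if crit in text)
    # ...negative rubrics penalize known exploits (filler, padding).
    score -= sum(1.0 for crit in negative if crit in text)
    # Explicit penalty for verbatim copying from retrieved sources.
    if any(chunk in src for src in sources for chunk in _chunks(report, 50)):
        score -= 2.0
    # Auxiliary citation reward: synthesized claims should carry cites.
    if "[" in report and "]" in report:
        score += 0.5
    return score


print(rler_reward(
    report="DRX1 variants disrupt splicing [1]; the evidence converges.",
    sources=["unrelated retrieved passage"],
    positive=["splicing", "evidence"],
    negative=["as an ai language model"],
))  # -> 2.5
```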

The performance metrics for DR Tulu-8B are compelling. On four long-form benchmarks—ScholarQA-CSv2, HealthBench, ResearchQA, and DeepResearch Bench—DR Tulu-8B (RL) substantially outperforms prior open deep research models, often by significant margins, even against much larger systems. More remarkably, it matches or exceeds several proprietary research agents, including OpenAI Deep Research and Perplexity Deep Research, on certain benchmarks. This parity, achieved with a significantly smaller 8B model, translates to dramatically lower operational costs, making advanced deep research capabilities accessible at a fraction of the price of closed alternatives. The model's ability to produce more compact, focused reports, as observed on ScholarQA-CSv2, further underscores its efficient synthesis capabilities.

Beyond benchmarks, DR Tulu demonstrated its real-world applicability in a clinically grounded case study, GeneticDiseasesQA. Here, it successfully investigated disease-causing gene variants, outperforming several baselines and even some proprietary systems on overall scores and evidence quality. While GPT-5 + OpenAI Search achieved the highest overall score in this specific study, DR Tulu exhibited stronger evidence synthesis, connecting disparate sources into coherent, relationship-level explanations. This indicates a robust generalization capability to complex, domain-specific challenges, highlighting its potential to accelerate scientific discovery in critical fields like medicine.

The decision to release all components of DR Tulu—the full training recipe, code, the DR Tulu-8B checkpoint, the RLER framework, and the dr-agent-lib research library—is a game-changer. This comprehensive open-sourcing fosters reproducibility, transparency, and extensibility, allowing researchers and developers to audit, customize, and build upon the framework. By enabling the integration of domain-specific tools via its Model Context Protocol (MCP)-based design, DR Tulu is poised to democratize advanced deep research, empowering a broader community to push the boundaries of AI-assisted knowledge discovery. This initiative sets a new standard for open AI development in a critical, high-impact domain.
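
For a sense of what plugging in a domain-specific tool might look like, the sketch below exposes a hypothetical variant-lookup stub as an MCP server using the FastMCP helper from the official MCP Python SDK. The tool itself is invented for illustration, and the mechanism by which dr-agent-lib attaches such servers is covered by Ai2's documentation rather than shown here.

```python
# pip install mcp  -- the official Model Context Protocol Python SDK.
# The variant-lookup tool is a hypothetical stub, not a real service.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("genetics-tools")

@mcp.tool()
def lookup_variant(gene: str, variant: str) -> str:
    """Summarize a gene variant (stubbed for illustration)."""
    # A real server would query ClinVar or an internal lab database.
    return f"{gene} {variant}: no curated record in this demo stub."

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to any MCP-aware agent
```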

#Agentic AI
#AI
#Ai2
#Deep Research
#Launch
#LLM
#Open-Source AI
#Reinforcement Learning
