
RAG’s Incremental Complexity: Beyond Basic Embeddings

Startuphub.ai Staff
Jul 29, 2025 at 4:37 PM · 2 min read

The true power of Retrieval Augmented Generation (RAG) in AI applications emerges not from a singular technique, but from a strategic layering of capabilities, each addressing specific query complexities. David Karam, formerly a Product Director at Google Search and now co-founder of Pi Labs, illuminated this nuanced reality at the AI Engineer World's Fair in San Francisco. He spoke about the journey from rudimentary in-memory embeddings to a sophisticated, planet-scale search system handling 160,000 queries per second, demonstrating that robust RAG is built one incremental step at a time.

https://www.youtube.com/watch?v=w9u11ioHGA0

Karam’s presentation served as a practical guide through the evolving landscape of RAG, highlighting the limitations of simpler approaches and the necessity of advanced techniques. He began by illustrating the inherent difficulty of seemingly innocuous queries, such as "falafel," which can imply a recipe, a restaurant, or historical context. This ambiguity underscores why basic relevance ranking often falls short, necessitating a deeper understanding of user intent and data structure.
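
To make the ambiguity concrete, here is a minimal sketch of routing such a query to an intent before retrieval; the intent labels and keyword heuristics are illustrative assumptions for this article, not details from Karam's system.

```python
# Sketch: route an ambiguous query to a likely intent before choosing a
# retrieval strategy. Labels and heuristics are illustrative assumptions.

def classify_intent(query: str, context: dict) -> str:
    """Pick a likely intent for a short, possibly ambiguous query."""
    q = query.lower()
    if any(w in q for w in ("recipe", "how to make", "ingredients")):
        return "recipe"
    if context.get("location_enabled") and any(w in q for w in ("near me", "restaurant", "open now")):
        return "restaurant"
    if any(w in q for w in ("history", "origin", "invented")):
        return "history"
    # A bare "falafel" carries no explicit signal; fall back to a default
    # ranking, or hand the decision to the LLM or the UX.
    return "ambiguous"

print(classify_intent("falafel", {"location_enabled": True}))  # -> "ambiguous"
print(classify_intent("falafel recipe", {}))                   # -> "recipe"
```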

A critical insight from Karam's discussion centered on common RAG pitfalls, particularly the pervasive yet often counterproductive practice of blind document chunking. He emphasized, "When you're chunking, you're losing context, you're splitting information that should be together." This fragmentation can severely impair retrieval accuracy, as vital information is broken across arbitrary boundaries. Instead, he advocated for more intelligent segmentation based on semantic units or the judicious use of overlapping chunks.
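
As a rough illustration of the alternative he described, the sketch below groups whole paragraphs into chunks and carries the trailing paragraph forward as overlap, rather than slicing at arbitrary character offsets. The paragraph-based splitting and size limits are assumptions for the example, not Karam's implementation.

```python
# Sketch: chunk on paragraph boundaries (semantic units) with overlap,
# instead of cutting the text at arbitrary character offsets.

def chunk_paragraphs(text: str, max_chars: int = 1000, overlap_paras: int = 1) -> list[str]:
    """Group whole paragraphs into chunks, repeating the last paragraph(s)
    of each chunk at the start of the next so context is not severed."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current, size = [], [], 0
    for para in paragraphs:
        if current and size + len(para) > max_chars:
            chunks.append("\n\n".join(current))
            # carry the trailing paragraph(s) forward as overlap
            current = current[-overlap_paras:]
            size = sum(len(p) for p in current)
        current.append(para)
        size += len(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```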

The talk further elaborated on the strategic integration of diverse retrieval methods. While vector embeddings excel at semantic similarity, traditional keyword-based approaches like BM25 still hold significant value. Karam noted, "BM25 is still very good for keyword matching," advocating for hybrid search strategies that leverage both keyword precision and semantic understanding. This duality ensures that both explicit and implicit query intentions are adequately addressed. Subsequent re-ranking with more powerful cross-encoders then refines the initial results, improving relevance before LLM processing.
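
One generic way to combine the two signal types is reciprocal rank fusion over the BM25 and embedding result lists, with a cross-encoder then re-scoring the fused shortlist. The sketch below shows that fusion step; it is a common pattern for hybrid search, not the specific recipe from the talk.

```python
# Sketch: merge a keyword (BM25) ranking with an embedding ranking via
# reciprocal rank fusion, then hand the top fused hits to a cross-encoder
# re-ranker. The constant k=60 is the conventional RRF default.

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Each ranking is a list of document ids, best first."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits      = ["doc3", "doc1", "doc7"]  # keyword matches, best first
embedding_hits = ["doc1", "doc5", "doc3"]  # semantic matches, best first

fused = reciprocal_rank_fusion([bm25_hits, embedding_hits])
top_for_reranking = fused[:50]  # a cross-encoder would then re-score these
print(fused)  # doc1 and doc3 lead because both retrievers agree on them
```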

Ultimately, Karam stressed the importance of pragmatism in RAG system design. Not every complex query demands an infinitely layered solution. Sometimes, the most efficient approach is to recognize the inherent difficulty and "punt it to the LLM or to the UX." This means either allowing the Large Language Model to creatively interpret and generate a response for highly ambiguous queries or designing the user experience to prompt for clarifying information. This philosophy acknowledges that while sophisticated techniques are powerful, resource allocation and user experience considerations should guide development priorities.

#AI
#AI Development
#David Karam
#Embeddings
#Google
#Hybrid Search
#LLM
#Retrieval Augmented Generation
