The prevailing obsession in the artificial intelligence industry has been to craft ever-smarter models, pushing the boundaries of computational "IQ." Yet, as Eric Zelikman, founder of humans& and former researcher at Stanford and xAI, articulated in a recent No Priors interview with Sarah Guo, this singular focus might be misdirected. Zelikman champions a profound reorientation: a shift from pure intelligence quotient to cultivating emotional intelligence (EQ) in AI, aiming to build systems that truly work *with* humans, not just for them or, worse, in their place.
Zelikman's journey into AI was initially driven by a desire to liberate human potential. He observed the vast, untapped talent in the world, often constrained by circumstance, and saw AI as a tool to automate mundane tasks, thereby freeing individuals to pursue their passions. "How do you actually build this technology that frees people up to kind of do the things that they are passionate about?" he pondered. This vision, however, evolved with a critical realization: true empowerment demands more than mere automation.
The complexity lies in genuinely understanding human intent. Zelikman recognized that simply automating tasks often meant that "so much of that talent doesn't get used," because the AI didn't grasp the underlying human goals or the nuances of desired outcomes. His early research, particularly on algorithms like STaR and Quiet-STaR, focused on enhancing the *intelligence* of models, enabling them to tackle harder problems through iterative reasoning, even when basic prompting fell short. These advancements, while significant, remained largely within the traditional "IQ" paradigm, striving for models to "answer more smartly" but still confined to a limited, task-centric understanding.
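For readers unfamiliar with that line of work, the core idea of STaR-style training is a bootstrapping loop: the model generates its own reasoning, keeps only the reasoning that led to correct answers, and is fine-tuned on what it kept. The sketch below is an illustrative simplification of that loop, not Zelikman's actual code; the `generate_rationale` and `fine_tune` helpers are hypothetical placeholders standing in for a real LLM API and training pipeline.

```python
# Illustrative sketch of a STaR-style self-improvement loop.
# generate_rationale and fine_tune are placeholders, not a real API.

def generate_rationale(model, question, hint=None):
    """Placeholder: ask the model for a chain of reasoning and a final answer."""
    # A real system would call an LLM here; we return dummy values.
    return "dummy rationale", "dummy answer"

def fine_tune(model, examples):
    """Placeholder: fine-tune the model on (question, rationale, answer) triples."""
    return model

def star_iteration(model, dataset):
    """One round: keep rationales that produced correct answers, then fine-tune."""
    accepted = []
    for question, gold_answer in dataset:
        rationale, answer = generate_rationale(model, question)
        if answer == gold_answer:
            accepted.append((question, rationale, gold_answer))
        else:
            # "Rationalization" step: retry with the correct answer as a hint,
            # so the model can learn a reasoning path it initially missed.
            rationale, answer = generate_rationale(model, question, hint=gold_answer)
            if answer == gold_answer:
                accepted.append((question, rationale, gold_answer))
    return fine_tune(model, accepted)
```

Even in this toy form, the point Zelikman makes is visible: the entire objective is getting a single, well-posed question answered more reliably, with no notion of who is asking or why.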
Today's most advanced models, while undeniably capable of solving problems that would challenge even PhD researchers, remain fundamentally "jagged" in their intelligence. They excel when problems are meticulously posed within their training distribution, but struggle with the broader context of human objectives and values. Zelikman highlights a critical flaw in much of the industry's approach: the drive to "get people out of the loop." This mindset, while seemingly efficient for scaling, risks alienating users and stifling true innovation by reducing human agency and overlooking the intricate, often inconsistent, nature of human desires.
The alternative, and the core mission of humans&, is to build AI that deeply understands and *collaborates* with humans. This means developing models that comprehend long-term implications, grasp different people's goals and ambitions, and even identify human weaknesses to coordinate more effectively. Rather than replacing human segments of work, this approach seeks to "grow that pie," fostering a symbiotic relationship where AI augments human capabilities.
This human-centric path presents its own set of challenges, particularly in collecting data for complex, multi-turn human-AI interactions that span various timescales and objectives. Current benchmarks, still largely task-centric, fail to capture the richness required for such an endeavor. Models today, Zelikman notes, "don't understand the long-term implications of the things that they do and say." They are often overly responsive to immediate prompts without grasping the broader context of a user's life or aspirations.
Imagine a friend who constantly needs you to re-explain everything about yourself every time you speak. This, Zelikman suggests, is akin to how current AI models interact. Their "memory" is often limited, and their responses are "super sensitive" to input, lacking the nuanced understanding of a long-term relationship. Therefore, for AI to truly empower people, its fundamental objective must shift: it needs to be actively *trying to learn about you*, your goals, your values, and how its actions contribute to your long-term well-being. This profound change in objective, moving beyond mere task completion to genuine collaboration and understanding, represents the next frontier for AI to unlock its full, human-aligned potential.
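One way to make the contrast concrete is to compare a stateless assistant with one that accumulates a user model across sessions. The sketch below is a hypothetical illustration of that shift, not humans&'s design; the `UserProfile` fields and update logic are invented for the example.

```python
# Hypothetical contrast: a stateless reply vs. one informed by a persistent
# user profile that the assistant keeps updating. Fields are illustrative only.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    goals: list = field(default_factory=list)        # long-term ambitions the user has shared
    preferences: dict = field(default_factory=dict)  # e.g. tone, level of detail
    history: list = field(default_factory=list)      # prior exchanges kept for context

def respond_stateless(prompt: str) -> str:
    # Today's default: the reply depends only on the immediate prompt.
    return f"answering '{prompt}' with no memory of who is asking"

def respond_with_profile(prompt: str, profile: UserProfile) -> str:
    # The shifted objective: fold what has been learned about the user into the
    # response, and record the exchange so understanding compounds over time.
    profile.history.append(prompt)
    context = f"{len(profile.history)} past exchanges, goals={profile.goals}"
    return f"answering '{prompt}' in light of {context}"

if __name__ == "__main__":
    profile = UserProfile(goals=["switch careers into research"])
    print(respond_stateless("should I take this job?"))
    print(respond_with_profile("should I take this job?", profile))
```

The toy does not capture values, timescales, or coordination, but it shows where the objective changes: the second function is rewarded for knowing the user better on the next turn, not just for answering the current one.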

