Andrej Karpathy, a prominent voice in the AI community, recently delivered a stark assessment of large language models (LLMs), asserting, "LLMs don't work yet." This provocative stance, amplified through a summary by PJ on Twitter, has sparked considerable discussion among AI professionals, founders, and venture capitalists about the fundamental capabilities and limitations of current generative AI. The interview, dissected by the video's presenter, offers a crucial counterpoint to the prevailing hype, though the presenter ultimately remains optimistic about the trajectory of AI development.
Karpathy’s critique centers on several perceived cognitive deficiencies. He contends that LLMs lack sufficient intelligence, are not multimodal enough, cannot effectively use computers, and struggle with memory, failing to retain information previously provided. Such cognitive shortcomings, he suggests, could take "about a decade to work through." This perspective highlights a core tension in AI development: whether intelligence must be intrinsically embedded within the model's architecture or can be augmented through external systems.
A significant insight emerging from this debate is the critical distinction between the raw capabilities of a foundational model and its performance when integrated with sophisticated "scaffolding." The video's presenter counters Karpathy's points by noting that many of these perceived limitations, such as memory and tool use, are actively being addressed through external mechanisms. While the core model itself may not natively possess these abilities, robust engineering around it, including persistent memory layers and API integrations, allows LLMs to overcome these challenges, sparing the model from having to relearn context from a "cold start" at each interaction.
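To make the scaffolding idea concrete, here is a minimal sketch of what such a memory layer might look like. It assumes a hypothetical `call_llm` function standing in for whatever model API is in use and a plain JSON file as the persistent store; a production system would use vector retrieval and proper tool routing, but the pattern is the same: state lives outside the weights and is re-injected into the prompt.

```python
import json
from pathlib import Path

# Hypothetical on-disk store standing in for a persistent memory layer.
MEMORY_FILE = Path("memory.json")


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API is in use; stubbed so the sketch runs."""
    return f"(model response to: {prompt[:60]}...)"


def load_memory() -> list[str]:
    """Load facts saved in earlier sessions, so a new session does not start from zero."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(facts: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))


def answer(user_message: str) -> str:
    facts = load_memory()
    # The weights never change; the scaffolding supplies the "memory" by
    # prepending previously stored facts to every prompt.
    prompt = (
        "Known facts about this user:\n"
        + "\n".join(f"- {fact}" for fact in facts)
        + f"\n\nUser: {user_message}\nAssistant:"
    )
    reply = call_llm(prompt)
    # Naive write policy: persist anything the user explicitly asks to be remembered.
    if user_message.lower().startswith("remember that"):
        facts.append(user_message[len("remember that"):].strip())
        save_memory(facts)
    return reply


if __name__ == "__main__":
    print(answer("Remember that I prefer concise answers."))
    print(answer("How should replies be formatted?"))
```

The key design point is that nothing about the model changes between calls; all continuity comes from what the surrounding code chooses to persist and re-supply.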
Karpathy further argued that when LLMs are booted up, "they always start from zero," lacking a "distillation phase" akin to human sleep where experiences are analyzed and integrated. Again, the presenter points to external scaffolding, suggesting that these supplementary systems effectively mitigate the need for the core model to update its weights with every new piece of information. The debate here touches on a philosophical question: does true intelligence require an internal, biological-like process of consolidation, or is a functional, externally-supported memory sufficient for practical and even advanced applications?
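The "sleep" analogy can also be approximated in scaffolding rather than in weights. The sketch below, again using a stubbed `call_llm` placeholder, runs a hypothetical offline consolidation pass after a session ends, compressing the transcript into durable notes for the next session; it illustrates the presenter's argument and is not a claim about how any particular product works.

```python
def call_llm(prompt: str) -> str:
    """Same placeholder as in the previous sketch; stubbed so this runs standalone."""
    return "- User prefers concise answers\n- Deploy target is arm64"


def consolidate_session(transcript: list[str], existing_notes: list[str]) -> list[str]:
    """Offline 'distillation' analogue: compress a finished session into durable notes.

    No weights are updated; consolidation happens entirely in the scaffolding,
    for example as a nightly batch job after sessions close.
    """
    prompt = (
        "Existing notes:\n" + "\n".join(existing_notes)
        + "\n\nSession transcript:\n" + "\n".join(transcript)
        + "\n\nRewrite the notes so they keep only what matters long term."
    )
    return call_llm(prompt).splitlines()


if __name__ == "__main__":
    notes = consolidate_session(
        transcript=["User: my deploy target is arm64", "Assistant: noted"],
        existing_notes=["User prefers concise answers"],
    )
    print(notes)
```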
Another "brutal quote" from Karpathy posits that "What's stored in their weights is only a hazy recollection of the internet. It's just a compressed blur of 15 trillion tokens squeezed into a few billion parameters. Their context window is just short-term working memory." This point resonates with many, including the presenter, who acknowledges that LLMs are, to a degree, "really memorizing what they've seen." This observation underscores the models' reliance on vast datasets for their performance, rather than an innate understanding or genuine reasoning capability. They excel at imitation, but struggle when forced "off the data manifold," as Karpathy states, indicating a limitation in generating truly novel or unexpected outputs without explicit data guidance.
The analogy of the human brain also features prominently in Karpathy's critique: "We've probably recreated a cortical tissue, pattern-learning and general, but we're still missing the rest of the brain. No hippocampus for memory. No amygdala for instincts. No emotions or motivations." This powerful metaphor illustrates the profound gaps that still exist between current AI and human-level general intelligence. It suggests that while LLMs demonstrate impressive pattern recognition, they lack the intricate biological components responsible for deeper cognitive functions, emotional intelligence, and self-motivation, which are integral to human experience and problem-solving.
However, the presenter is skeptical of the notion that LLMs "stumble" on anything truly new or on ideas without a template. Recent advancements have shown models capable of generating novel scientific hypotheses, co-authoring research papers, and even discovering improved matrix multiplication algorithms. This highlights a crucial counter-insight: the definition of "truly new" in the context of AI is constantly evolving, and systems that may look like mere "autocomplete engines" are already demonstrating sparks of creativity and innovation that push the boundaries of human-machine collaboration. The very act of combining vast amounts of existing knowledge in unprecedented ways can lead to outcomes previously considered exclusive to human ingenuity.
The ongoing discourse around Karpathy's remarks serves as a vital reality check, urging the AI community to temper exuberance with a rigorous examination of foundational limitations. While the distinction between a model's intrinsic capabilities and its augmented performance via external tools offers a path forward, it also raises questions about the ultimate nature of AI intelligence. The rapid pace of innovation, particularly in areas like code generation and scientific discovery, suggests that even with acknowledged shortcomings, the upside for AI remains immense, continually challenging the pessimistic outlooks on its long-term potential.

