Large Language Models, often lauded for their vast knowledge, are fundamentally misunderstood when we call them intelligent, argues Professor David Krakauer. His provocative thesis posits that true intelligence operates on a "less is more" principle, a stark contrast to the "more is more" paradigm of current AI. This perspective redefines intelligence not as the accumulation of vast data, but as the elegant capacity to simplify complexity, to do more with fewer resources.
Professor David Krakauer, President and William H. Miller Professor of Complex Systems at the Santa Fe Institute, delivered this compelling argument at the Diverse Intelligences Summer Institute 2025. His talk offered a profound re-evaluation of intelligence, drawing distinctions between emergent systems, which are "more is different," and intelligent systems, which are inherently about efficiency and parsimony. This framework directly confronts contemporary AI development, suggesting a critical misdirection in how we conceive of artificial intelligence.
Krakauer asserts that the current crop of large language models, or LLMs, embodies the "more is more" philosophy. He states, quite pointedly, that LLMs are "essentially sophisticated libraries that know everything but understand nothing." He illustrates this with a classroom analogy: "When a student looks up answers in a library during an exam, we call them a cheater, not intelligent." This humorous yet incisive observation highlights his core critique: sheer informational breadth, without the underlying capacity for novel problem-solving or efficient adaptation, does not equate to intelligence.
Indeed, Krakauer argues that intelligence is not merely knowledge, a distinction that is crucial for understanding his broader theory. He draws on psychologist Herbert Woodrow's definition of intelligence as "the capacity to acquire capacity." This shifts the focus from static knowledge to dynamic potential, emphasizing an organism's ability to learn and adapt over time.
His definition extends intelligence far beyond human cognition, encompassing all living things. From bacteria to elephants, every organism, by virtue of its evolutionary history, exhibits intelligence. It is the information accumulated through natural selection, a coded history of adaptive states. This accumulated capacity allows organisms to navigate and thrive in their environments, showcasing a fundamental form of intelligence.
Krakauer further dissects intelligence into three dimensions: strategic, inferential, and representational. Strategic intelligence, he notes, is about adaptation and survival, an area where viruses, with their minimal genetic code, often "beat us" due to their rapid evolutionary prowess. Inferential intelligence involves computation and logic, a realm where humans, ironically, built calculators and computers precisely because we are inherently "so bad at it." The most uniquely human form, however, is representational intelligence, the ability to find better ways to encode and frame problems.
The Soma Cube puzzle serves as a powerful demonstration of representational intelligence. This combinatorial problem, which mathematically involves 15,000 logical clauses, can be solved by a four-year-old child not through complex computation, but by manipulating physical blocks. The physical world, through its inherent constraints and properties, does the computing for us. This is the essence of intelligence: "making impossibly hard problems tractable through clever representation." It’s about externalizing the computational burden onto the environment.
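The Soma Cube point can be made concrete with a toy sketch (my construction, not Krakauer's): a backtracking solver that tiles a small grid with L-trominoes. Even at this miniature scale, the representational move is visible: encoding pieces as sets of cells and pruning placements that don't fit collapses what would otherwise be a blind combinatorial search.

```python
from itertools import product

# Toy analogue of the Soma cube: tile a 3x2 grid with L-trominoes.
# The full 3D puzzle encodes to thousands of logical clauses; even
# this tiny case shows how a good representation (occupancy sets
# plus pruning) tames the search.

WIDTH, HEIGHT = 3, 2
CELLS = set(product(range(WIDTH), range(HEIGHT)))

# One L-tromino as cell offsets; rotations are generated below.
BASE = [(0, 0), (1, 0), (0, 1)]

def rotations(shape):
    """All distinct 90-degree rotations of a shape, normalized to the origin."""
    shapes, cur = set(), shape
    for _ in range(4):
        cur = [(-y, x) for x, y in cur]  # rotate 90 degrees
        minx = min(x for x, _ in cur)
        miny = min(y for _, y in cur)
        shapes.add(frozenset((x - minx, y - miny) for x, y in cur))
    return shapes

def placements(shape):
    """Every translated rotation of a shape that fits inside the grid."""
    out = []
    for rot in rotations(shape):
        for dx, dy in CELLS:
            cells = frozenset((x + dx, y + dy) for x, y in rot)
            if cells <= CELLS:
                out.append(cells)
    return out

def solve(empty, pieces):
    """Backtracking exact cover: place each piece on disjoint empty cells."""
    if not pieces:
        return not empty  # solved iff every cell is covered
    head, rest = pieces[0], pieces[1:]
    return any(solve(empty - p, rest)
               for p in placements(head) if p <= empty)

print(solve(CELLS, [BASE, BASE]))  # prints True: two L-trominoes tile the grid
```

The representation does the heavy lifting here, just as the physical blocks do for the child: illegal placements are never enumerated, so the search space shrinks before any "computation" happens.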
The purpose of science, Krakauer contends, is not to control or exploit the universe, but to make it intelligible to us. This humanistic endeavor seeks underlying rules rather than mere effective theories. For instance, Ptolemy's complex, multi-epicycle model of the solar system, though ingenious, was a "more is more" approach. Newton's inverse square law, however, was "less is more"—a compact, elegant description of the same regularity, revealing a deeper rule. This pursuit of parsimonious, explanatory rules is a hallmark of scientific intelligence.
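The contrast can be rendered schematically (my gloss, not the lecture's notation). In modern terms an epicycle model is a truncated sum of rotating circles, with parameters multiplying as accuracy demands, whereas Newton's gravitation is a single two-body rule:

```latex
% Ptolemy: a planet's apparent path as a sum of circles --
% each added epicycle adds parameters ("more is more")
z(t) \;=\; \sum_{k=1}^{n} R_k \, e^{\,i(\omega_k t + \phi_k)}

% Newton: one compact rule generating the same regularity ("less is more")
F \;=\; \frac{G \, m_1 m_2}{r^2}
```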
Life itself, Krakauer proposes, is an intensive property of adaptive matter—an outcome of selection. Intelligences, conversely, are extensive properties, representing the coded history of adaptive states. This means that while life is about the *state* of being adaptive, intelligence is about the *record* and *accumulation* of successful adaptations. It’s why we say an elephant is "smarter" than a worm, but never "more alive"—the difference lies in their accumulated capacity for intelligent adaptation.
Ultimately, Professor Krakauer’s lecture serves as a vital call to reconsider our definitions of intelligence, especially in the age of advanced AI. He argues that true intelligence is not about consuming and regurgitating vast datasets, but about the elegant ability to simplify, to adapt, and to find ingenious representations that transform complex problems into tractable ones. It is about "making hard problems easy." This fundamental distinction between raw data and genuine insight offers a critical lens for founders, investors, and AI professionals seeking to build truly intelligent systems.

