Large language models are often lauded for their vast knowledge, but according to Professor David Krakauer they are fundamentally misunderstood as intelligent. His provocative thesis is that true intelligence operates on a "less is more" principle, in stark contrast to the "more is more" paradigm of current AI. On this view, intelligence is not the accumulation of vast data but the capacity to simplify complexity, to do more with fewer resources.
Krakauer, President and William H. Miller Professor of Complex Systems at the Santa Fe Institute, delivered this argument at the Diverse Intelligences Summer Institute 2025. His talk drew a distinction between emergent systems, where "more is different," and intelligent systems, which are inherently about efficiency and parsimony. That framework directly challenges contemporary AI development, suggesting a critical misdirection in how we conceive of artificial intelligence.
Krakauer argues that today's large language models embody the "more is more" philosophy. LLMs, he says pointedly, are "essentially sophisticated libraries that know everything but understand nothing." He illustrates the point with a classroom analogy: "When a student looks up answers in a library during an exam, we call them a cheater, not intelligent." The humorous but incisive observation captures his core critique: sheer informational breadth, without the capacity for novel problem-solving or efficient adaptation, does not equate to intelligence.
