The Spherical Cow Problem: Why AI’s Success May Be a Philosophical Illusion

Jan 18 at 6:57 AM · 5 min read

“If you don’t get the second question—‘Why are things NOT that way?’—you’ve done NOTHING.” This stark assessment from Professor Noam Chomsky cuts to the heart of the latest philosophical debate rocking the AI and neuroscience communities: the question of scientific simplification. A recent special edition episode of Machine Learning Street Talk (MLST), “The Simplification of Reality in Science,” brought together luminaries like Chomsky, Karl Friston, Mazviita Chirimuuta, Francois Chollet, and John Jumper to dissect how models—from physics’ infamous “spherical cow” joke to deep learning’s vast neural networks—shape, and potentially distort, our understanding of reality itself. The central tension explored is whether the utility of a model implies its truth, a critical question for founders and analysts betting on the inevitability of Artificial General Intelligence (AGI).

The episode opens with a tribute to Professor Karl Friston, a highly cited neuroscientist who developed the Free Energy Principle (FEP). Friston’s concept attempts to explain all behavior—perception, action, and learning—with a single mathematical quantity, effectively creating a grand unified theory of the brain. Friston himself admitted that the FEP is “almost tautologically simple,” echoing the famous physics joke about assuming a spherical cow in a vacuum to make calculations tractable. This raises the “Spherical Cow Problem”: when does a necessary simplification become a dangerous illusion that we mistake for the real thing?

This is the core concern of Professor Mazviita Chirimuuta, author of The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. Chirimuuta, who teaches at the University of Edinburgh, argued that the pursuit of simple, universal underlying principles—a hallmark of science since Galileo and Newton—often blinds us to the messy reality we are trying to model. She framed the debate as a boxing match between two philosophical attitudes: Simplicius, who believes the universe is fundamentally simple and ordered, and Ignorantio, who holds that our successful models are merely useful fictions because we are too cognitively limited to grasp the true complexity of reality. Chirimuuta sided with Ignorantio, suggesting that successful science merely confirms our skill at building useful simplifications, not that nature itself is simple. She noted that when scientists abandon the goal of pure curiosity and pursue applied science—engineering systems to achieve desired outcomes—the oversimplification problem is less critical, provided the tool works.

The contemporary manifestation of this philosophical wager is the belief that the mind is software running on biological hardware. This metaphor, which followed earlier mechanistic analogies like hydraulic pumps and telegraph networks, has hardened into a widely accepted ontological claim, especially in Silicon Valley. Francois Chollet, a deep learning researcher and influential voice in the field, proposed the “Kaleidoscope Hypothesis,” arguing that beneath the apparent chaos of reality lies simple, repeating patterns—like the colored bits of glass in a kaleidoscope that create infinite complexity. For Chollet, intelligence is the process of mining experience to extract these “unique atoms of meaning,” or abstractions, which are then repeated and transformed. Joscha Bach took this even further, provocatively claiming that software is literally spirit, not metaphorically, because it represents a causal pattern that transcends its physical substrate.

Chirimuuta and other philosophers strongly pushed back on this metaphysical promotion. They argued that the “sameness” Bach sees across different chips running the same program is something we impose; it exists in our description, not necessarily in the physical reality of the different voltages and electrons. The computational model may be useful, but mistaking its elegance for the structure of reality is the “fallacy of misplaced concreteness.” Furthermore, Chirimuuta posited that the prevailing confidence in AGI’s inevitability—particularly among tech insiders—might be a “cultural-historical illusion,” rooted in our inherited mechanistic assumptions about the mind.

This critique leads to a crucial distinction articulated by Nobel laureate Dr. John Jumper, the lead developer of AlphaFold, who differentiated between prediction, control, and understanding. Jumper argued that current AI systems excel at prediction (forecasting future states) and control (manipulating outcomes), but they do not achieve human-level understanding. Understanding, in his view, requires a human in the loop: a theory must be reducible to a minimal collection of facts that can be communicated to another person in a compact, fixed form—one that “fits on an index card.” When we accept black-box tools that work without understanding the underlying mechanisms, we risk being blindsided when those tools inevitably fail.

Professor Luciano Floridi offered a framework for navigating this complexity, emphasizing the difference between metaphysics (the nature of reality itself) and ontology (how we structure the world based on our current perspectives and tools). Floridi suggested that the digital revolution has changed the ontology of the world around us, leading us to interpret human beings as “informational organisms.” This reontologizing is useful for developing technologies like AI, but it is not a metaphysical truth. The mistake, he warned, is asking an absolute question—“Is the universe a giant computer? Yes or no?”—which he deemed meaningless because the answer depends entirely on the purpose and context of the inquiry. In the end, the simplification of reality is not a flaw in science but an inherent necessity, given our cognitive and temporal limitations. The danger lies in forgetting that our models are maps, not the territory itself, and that the utility of a model is distinct from its ultimate truth.