"It turns out, rather concerningly, that nobody really knows the answer" to what's truly happening inside large language models, stated Stuart Ritchie of Anthropic Research Communications. This candid admission set the stage for a compelling discussion with fellow Anthropic researchers Josh Batson, Emmanuel Ameisen, and Jack Lindsey, as they delved into the burgeoning field of AI interpretability, aiming to demystify the complex "thinking" processes of models like Claude.
The prevailing analogy for understanding large language models often reduces them to "glorified autocompletes." However, this simplification misses the profound complexity of their internal mechanisms. As Josh Batson aptly put it, these models are not programmed with explicit "if the user says hi, you should say hi" rules. Instead, they are "trained" through an evolutionary process, where "a whole lot of data... goes in and the model... starts out being really bad at saying anything and then its inside parts get tweaked." This organic evolution means the final AI bears "little resemblance to what it started as," akin to a biological organism whose ultimate purpose (survival and reproduction) is achieved through myriad, often opaque, internal processes.
Emmanuel Ameisen highlighted that the task of simply predicting the next word is "deceptively simple." To perform this task effectively, models develop sophisticated contextual understanding. Jack Lindsey further elaborated, suggesting that while the model's objective is to predict the next token, "internally, it's developed potentially all sorts of intermediate goals and abstractions" that help it achieve this. These abstractions manifest as "concepts," ranging from low-level objects and words to higher-level notions of goals, plans, and even user sentiment.
Anthropic's interpretability research seeks to unveil these hidden concepts, offering a "flowchart" of the model's thought process. This involves identifying specific "circuits" or neural pathways that activate in response to particular inputs. For instance, the researchers cited a circuit that lights up specifically for "sycophantic praise," revealing a learned behavioral trait. Another surprising discovery was a "6+9" feature: a single circuit that activates whenever the model performs an addition in which one number ends in 6 and the other in 9, regardless of the full numbers involved. This demonstrates the model's capacity for generalized, reusable computations, not just rote memorization.
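To make the flavor of that discovery concrete, here is a hand-written toy analogue (purely illustrative, not the learned circuit itself): a single check over last digits that applies to any pair of operands ending in 6 and 9, the way one reusable feature can serve infinitely many specific addition problems.

```python
# Hypothetical toy analogue of the "6 + 9" feature described above. The point is
# that one reusable check over last digits covers every addition problem of that
# shape, rather than memorizing each sum separately.
def ends_in_6_plus_9(a: int, b: int) -> bool:
    """'Fires' (returns True) whenever one operand ends in 6 and the other in 9."""
    return {a % 10, b % 10} == {6, 9}

def last_digit_of_sum(a: int, b: int) -> int:
    """If the 6+9 'feature' fires, the last digit of the sum is always 5."""
    if ends_in_6_plus_9(a, b):
        return 5                      # same shortcut regardless of the full numbers
    return (a + b) % 10               # other last-digit pairs have their own rules

# The same check generalizes across very different-looking problems:
assert ends_in_6_plus_9(6, 9) and ends_in_6_plus_9(36, 59) and ends_in_6_plus_9(1996, 29)
assert last_digit_of_sum(36, 59) == (36 + 59) % 10 == 5
```

The real feature is learned rather than hand-coded, but the analogy captures why its existence points to generalized computation: a single sub-circuit handling a whole family of inputs is far more compact than a lookup table of every sum the model has ever seen.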
A critical aspect of interpretability is assessing the "faithfulness" of a model's self-reported thought processes versus its actual internal operations. The researchers discussed instances where models, when prompted to explain their reasoning, might "bullshit" or confabulate, presenting a plausible but inaccurate account. This underscores the need for robust interpretability tools to ensure trust and reliability, especially as AI models take on increasingly critical roles. Understanding these internal dynamics is paramount for founders, VCs, and AI professionals navigating the evolving landscape of artificial intelligence.

