Jared Kaplan, Anthropic's Co-founder and Chief Science Officer, recently illuminated the foundational insights driving artificial intelligence's relentless march towards human-level capabilities. Speaking at AI Startup School in San Francisco, Kaplan, a former theoretical physicist, underscored how the predictable nature of AI scaling is not merely an engineering feat but an almost "physical truth" reshaping our understanding of intelligence itself. His discourse centered on the surprising regularity with which AI performance improves, a phenomenon he contends is as precise as any trend observed in physics or astronomy.
The remarkable progress in AI, Kaplan explained, stems from two core training phases: pre-training and reinforcement learning. Pre-training involves models learning to imitate human-written data and discern underlying correlations, while reinforcement learning optimizes these models based on human feedback, guiding them toward "helpful, honest, and harmless" behaviors. Crucially, Kaplan highlighted that both phases exhibit clear scaling laws, meaning that as compute power, dataset size, and model parameters increase, performance improves in a highly predictable manner.
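The "highly predictable" improvement Kaplan describes is a power law: test loss falls by a fixed ratio every time model size (or compute, or data) is multiplied by a fixed factor. The sketch below illustrates this with a loss-versus-parameters curve; the exponent and normalizing constant are illustrative assumptions chosen for the example, not claims from the talk.

```python
# Hedged sketch of a neural scaling law: loss as a power law in model size N.
# ALPHA and N_C are assumed, illustrative constants -- the talk gives no numbers.
ALPHA = 0.076      # assumed power-law exponent
N_C = 8.8e13       # assumed normalizing constant (in parameters)

def loss_from_params(n_params: float) -> float:
    """Predicted loss L(N) = (N_C / N) ** ALPHA for a model with N parameters."""
    return (N_C / n_params) ** ALPHA

# The regularity Kaplan calls almost a "physical truth": each doubling of N
# shrinks predicted loss by the same multiplicative factor, 2 ** -ALPHA.
for n in (1e9, 2e9, 4e9, 8e9):
    print(f"N = {n:.0e}  predicted loss = {loss_from_params(n):.4f}")
```

Because equal multiplicative increases in scale yield equal multiplicative drops in loss, performance at a larger scale can be forecast before the model is trained, which is what makes the scaling trend useful for planning.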
This predictable scaling unlocks an ever-expanding horizon of AI capabilities. Models are demonstrating increasing flexibility across modalities, from text and code to multimodal applications and robotics. Concurrently, the complexity and duration of tasks AI can autonomously complete are doubling roughly every seven months, letting models take over work that would otherwise consume substantial human time.
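A doubling every seven months compounds quickly, which is easy to miss when reading the figure as a one-off statistic. The sketch below works through the arithmetic; the one-hour starting horizon is an assumed baseline for illustration only.

```python
# Hedged sketch: if the length of tasks AI can complete autonomously doubles
# every 7 months, the task horizon grows exponentially with elapsed time.
DOUBLING_MONTHS = 7.0

def task_horizon(months_elapsed: float, baseline_hours: float = 1.0) -> float:
    """Task horizon in hours after months_elapsed, given 7-month doubling.

    baseline_hours is an assumed starting point, not a figure from the talk.
    """
    return baseline_hours * 2 ** (months_elapsed / DOUBLING_MONTHS)

# 42 months (3.5 years) is six doublings: a 64x longer task horizon.
print(task_horizon(42))  # -> 64.0
```

The same compounding logic underpins Kaplan's advice later in the piece: a product that "doesn't quite work" today sits only a few doublings away from working well.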
Despite this rapid progress, Kaplan identified critical areas still requiring advancement: knowledge, memory, and oversight. AI models need to integrate vast organizational knowledge, effectively retain and utilize past interactions for long-horizon tasks, and develop fine-grained understanding to navigate nuanced problems. These are not merely technical hurdles but represent the subtle complexities of human intelligence that, once mastered, could allow AI to undertake the work of entire human organizations or even scientific communities. Kaplan believes human-AI collaboration, where humans act as "managers" and "sanity checkers," is the most promising path forward for tackling these advanced, "fuzzy" tasks, leveraging AI's capacity for brilliant, albeit sometimes flawed, output.
For founders and AI professionals, Kaplan offered three key recommendations: build products that don't quite work yet, prepare to use AI itself to accelerate AI integration, and identify areas where AI adoption could grow exponentially. He stressed that because current models are improving so rapidly, today's imperfect products could become tomorrow's breakthrough solutions with the next iteration of AI. The ultimate challenge, however, lies in efficiently integrating AI into existing systems, a bottleneck that current AI capabilities are uniquely positioned to address.
