"Artificial intelligence may feel like some brand new tech trend, but the truth is, AI has been evolving for over 70 years." This incisive observation from Jeff Crume, a Distinguished Engineer at IBM, sets the stage for a compelling journey through the history of artificial intelligence. In a recent presentation, Crume meticulously traced the winding path of AI, highlighting critical inflection points that transformed it from theoretical constructs and programmed logic into the sophisticated, learning systems we encounter today, and those on the horizon. His commentary offers a valuable perspective for founders, investors, and AI professionals grappling with the rapid pace of innovation.
The genesis of artificial intelligence as a conceptual framework dates back to the 1950s with Alan Turing's groundbreaking work. Turing, often hailed as the father of computer science, proposed the "Turing Test" in 1950 as a benchmark for machine intelligence: if a human interrogator could not distinguish a computer's typed responses from a human's, the machine was deemed intelligent. This foundational idea, alongside the coining of the term "AI" in 1956 and the development of the LISP programming language, marked AI's theoretical and early practical beginnings. LISP, short for List Processing, leaned heavily on recursion, and giving a system even a semblance of intelligence in that era meant writing extensive code by hand.
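To give a flavor of that recursive, list-oriented style, here is a small sketch written in Python rather than LISP, purely as an illustration: even trivial symbolic routines had to be spelled out by hand.

```python
# Illustrative sketch only: a LISP-flavored recursive list routine, in Python.
# Early symbolic AI leaned on exactly this kind of hand-written recursion.

def flatten(tree):
    """Recursively flatten a nested list of symbols into a flat list."""
    if not isinstance(tree, list):      # base case: an atom
        return [tree]
    result = []
    for item in tree:                   # recursive case: descend into sublists
        result.extend(flatten(item))
    return result

print(flatten(["robot", ["sees", ["red", "block"]], "acts"]))
# -> ['robot', 'sees', 'red', 'block', 'acts']
```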
This early era of programmed intelligence, though foundational, was inherently limited. Systems like ELIZA, developed in the 1960s, offered a glimpse into natural language processing (NLP), mimicking a psychotherapist in conversation. While impressive for its time, ELIZA's responses were based on pre-programmed scripts, not genuine understanding. Similarly, the 1970s and 80s saw the rise of Prolog and "expert systems," rule-based AI designed to emulate human decision-making in specific domains. There was significant hype around their potential, but these systems proved brittle. Their intelligence was constrained by the rules they were explicitly programmed with, lacking the adaptability and generalizability that true intelligence requires. If you wanted to make your system smarter, you had to go back in and write more code.
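To show how shallow that scripted approach was, here is a minimal ELIZA-style sketch (a toy reconstruction, not ELIZA's actual code): a few keyword patterns mapped to canned templates, with no understanding behind them.

```python
import re

# Minimal ELIZA-style sketch: canned templates triggered by keyword patterns.
# There is no understanding here, only pattern matching and substitution.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmother\b",    "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."   # default deflection when no rule fires

print(respond("I feel anxious about work"))   # -> Why do you feel anxious about work?
print(respond("My mother called yesterday"))  # -> Tell me more about your family.
```

Making such a system "smarter" means adding more rules by hand, which is exactly the brittleness that limited expert systems.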
A significant shift occurred in 1997 when IBM's Deep Blue supercomputer defeated reigning world chess champion Garry Kasparov. This was a monumental achievement, shattering the long-held belief that the strategic depth and foresight required for chess grandmastery were exclusive to human intellect. Crume noted, "It had been thought that you can write a computer program that would be able to beat... a very good chess player. But to overcome the intelligence, the expertise, the planning skills, the strategy, the creativity, the just sheer genius of what it would take to be a chess grandmaster, it was thought no computer would ever be able to do that." Deep Blue's victory, achieved through brute-force computation and sophisticated search algorithms, signaled a resurgence in AI research and investment.
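Deep Blue itself combined purpose-built hardware with hand-tuned evaluation functions, but the core idea of searching a game tree can be sketched in a few lines. The minimax-with-alpha-beta toy below operates on an abstract tree of scores and is purely illustrative:

```python
# Toy sketch of game-tree search with alpha-beta pruning.
# Deep Blue applied this family of techniques at enormous scale with custom
# chess hardware; this abstract version only shows the idea.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the best achievable score from `node`.

    A node is either a number (a leaf's evaluation score) or a list of
    child nodes to search.
    """
    if not isinstance(node, list):          # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:               # prune: opponent will avoid this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Two plies of lookahead over a tiny hand-made tree of leaf scores.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, maximizing=True))  # -> 3
```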
The true paradigm shift from programmed intelligence to learning systems, however, began to accelerate in the 2000s with the advent of machine learning (ML) and deep learning (DL). These technologies moved beyond explicit programming, allowing systems to learn patterns and make predictions from vast datasets. Deep learning, in particular, used multi-layered neural networks loosely inspired by the structure of the human brain, enabling unprecedented capabilities in pattern recognition and data analysis. This marked a profound evolution: AI began to learn from data rather than being explicitly told what to do.
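The contrast with the rule-writing era can be shown with a deliberately tiny sketch: a single artificial neuron that learns the logical AND function from examples instead of being handed the rule. This illustrates the principle only; real deep learning stacks millions of such units.

```python
# Tiny sketch of "learning from data" rather than hand-written rules:
# a single artificial neuron learns the logical AND function from examples.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, adjusted by training rather than programmed by hand
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), target in examples:
        prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        error = target - prediction
        # nudge the weights toward whatever reduces the error
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b    += lr * error

print([(x, 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0) for x, _ in examples])
# -> [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```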
This learning capability culminated in another watershed moment in 2011, when IBM's Watson competed on the TV game show Jeopardy!. Unlike chess, Jeopardy! demands a nuanced understanding of natural language, including puns, idioms, and figures of speech, across an incredibly broad range of subjects. Furthermore, it requires rapid, confident responses under pressure. Watson's triumph over two human champions demonstrated AI's ability to comprehend context, infer meaning, and synthesize information in ways previously thought impossible for machines. This achievement showcased the power of sophisticated natural language processing combined with immense computational power.
The most recent and perhaps most impactful inflection point, according to Crume, arrived around 2022 with the widespread introduction of generative AI (GenAI). Built on massive "foundation models," GenAI has captivated the public imagination with its ability to generate novel text, images, and sounds, and even create convincing "deepfakes." For many, "this is when AI suddenly got real." These systems, highly conversational and seemingly expert across diverse domains, represent a significant leap in AI's perceived capabilities.
Looking ahead, Crume identifies agentic AI as the next frontier. This involves giving AI systems greater autonomy to operate independently, pursue specific goals, and leverage various services to achieve them. Beyond that lies the aspirational goal of Artificial General Intelligence (AGI), where AI would possess human-level intelligence across all cognitive tasks, and eventually Artificial Super Intelligence (ASI), far exceeding human capabilities. The journey has been long, marked by periods of both excitement and disappointment, but the current trajectory suggests an accelerating pace of innovation, driven by AI's newfound capacity for genuine learning.

