The rapid advancement of artificial intelligence presents both unprecedented opportunities and complex challenges, a dichotomy sharply explored by IBM's Martin Keen and Jeff Crume, expertly guided by Graeme Noseworthy in a recent discussion. Their insights cut through the hype, offering a grounded perspective on the nuances of AI development and the persistent realities of cybersecurity.
Martin Keen, an IBM Master Inventor, illuminated the critical distinction between generative AI and its more autonomous successor, agentic AI. While generative AI excels at producing novel content—be it text, code, images, or music—in response to explicit prompts, agentic AI marks a profound leap. It is designed to achieve specific goals, planning and executing multi-step actions without constant human intervention. "It can trigger its own next steps, it can adapt to changing context and keep going until it finally meets that goal," Keen explained, highlighting its potential for complex tasks like autonomous incident response or adaptive robotic process automation.
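Keen's description maps naturally onto a control loop: plan a step, execute it, observe the result, and repeat until the goal is met. The sketch below is a minimal, illustrative rendering of that cycle in Python; the Agent class, its stub planning and tool functions, and the incident-triage goal are hypothetical stand-ins, not any specific IBM implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal agentic loop: plan, act, observe, repeat until the goal is met."""
    goal: str
    max_steps: int = 10
    history: list = field(default_factory=list)

    def plan_next_action(self) -> str:
        # A real system would have an LLM choose the next step from the
        # goal plus the observation history; here we return a stub.
        return f"step {len(self.history) + 1} toward: {self.goal}"

    def act(self, action: str) -> str:
        # A real agent would invoke a tool here (search, API call, code execution).
        return f"result of ({action})"

    def goal_met(self) -> bool:
        # A real agent would evaluate observations against the goal;
        # this stub declares success after three steps.
        return len(self.history) >= 3

    def run(self) -> list:
        for _ in range(self.max_steps):                  # bounded autonomy
            action = self.plan_next_action()             # triggers its own next step
            observation = self.act(action)               # executes without a new human prompt
            self.history.append((action, observation))   # adapts to the changing context
            if self.goal_met():                          # keeps going until the goal is met
                break
        return self.history

if __name__ == "__main__":
    for step in Agent(goal="triage a security incident").run():
        print(step)
```

The max_steps cap is deliberate: bounding the loop is one simple way to keep autonomous execution within the kind of oversight discussed next.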
This shift from reactive content generation to proactive, goal-oriented autonomy fundamentally alters how we conceive of AI's role. It moves AI from a tool for creation to a partner in execution, demanding a new level of trust and sophisticated oversight.
The conversation pivoted to cybersecurity with Jeff Crume, an IBM Distinguished Engineer, addressing a common misconception about the "Dark Web." Crume clarified that "dark" refers not to illicit content, but to its unindexed, hidden nature, making it difficult to find. He underscored the impracticality of blocking it, citing three primary reasons: its minuscule size (less than 2% of the internet), jurisdictional complexities in a global digital landscape, and its surprising utility in certain contexts, such as enabling free speech in oppressive regimes or monitoring hacker activities for research. Crume aptly described the effort to block it as "a bit of a game of whack-a-mole."
Keen then tackled another critical phenomenon: AI "hallucinations." He clarified that AI isn't "lying" with intent but rather "confidently stating false information as though it were a fact." This stems from large language models (LLMs) being prediction machines, optimized for fluency and cohesion rather than absolute factual accuracy. They are trained to predict the most statistically likely next "token" in a sequence, often filling knowledge gaps with plausible-sounding but incorrect information, especially on recent or niche topics. Mitigation strategies such as Retrieval Augmented Generation (RAG), which pulls contextual information from external databases, are crucial, but Keen stressed the ongoing need for "human-in-the-loop validation" to verify outputs.
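To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-ground pattern. It assumes a toy in-memory corpus and keyword-overlap scoring as stand-ins for the vector embeddings, document store, and LLM call a production pipeline would use; the names (CORPUS, retrieve, build_prompt) are purely illustrative.

```python
# Minimal RAG sketch: retrieve relevant context, then ground the prompt in it
# so the model answers from supplied facts instead of guessing.

CORPUS = [  # hypothetical knowledge-base entries
    "Retrieval Augmented Generation grounds LLM answers in external documents.",
    "Agentic AI plans and executes multi-step actions toward a goal.",
    "Hallucinations are confident but false statements produced by an LLM.",
]

def score(query: str, doc: str) -> int:
    """Keyword overlap as a stand-in for embedding similarity."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most relevant to the query."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context and instruct the model to stay within it."""
    context = "\n".join(retrieve(query))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}")

print(build_prompt("What does Retrieval Augmented Generation do?"))
```

Even with retrieval in place, the generated answer still warrants the human review Keen calls for, since the model can misread or overstate the retrieved context.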
Looking ahead, the experts touched upon AI's impact on employment, suggesting that AI will primarily transform jobs rather than outright replace them, much like ATMs redefined banking roles without eliminating them. Jobs characterized by rule-based, high-documentation/low-judgment, or low-context tasks are most susceptible to automation. Conversely, roles demanding creativity, empathy, complex reasoning, and physical dexterity remain firmly within the human domain. This necessitates continuous upskilling to leverage AI as an augmentation tool, enabling humans to focus on higher-value, uniquely human endeavors.