AI in Cybersecurity: Threats and Opportunities

IBM experts discuss the dual role of AI in cybersecurity, from finding vulnerabilities to enabling sophisticated scams, and the rise of insider recruitment.

Panel discussion on AI in cybersecurity with experts from IBM
Image credit: Security Intelligence · IBM

In the fast-evolving world of cybersecurity, staying ahead of threats requires constant vigilance and the adoption of new tools. The recent "Security Intelligence" podcast featuring IBM experts Matt Kosinski, Michelle Alvarez, and Dustin Heywood (aka EvilMog) delved into the critical role of AI in both offense and defense, and the emerging challenges it presents.

Meet the Experts

Matt Kosinski, host of the podcast and a key figure at Security Intelligence, guided the conversation, bringing a journalist's perspective to the complex issues discussed. His role involves dissecting industry trends and making them accessible to a broad audience.

Michelle Alvarez, Manager of X-Force Strategic Threat Analysis at IBM, offered insights into the strategic implications of the latest cybersecurity reports. Her work focuses on understanding the broader threat landscape and how organizations can best defend themselves.

The full discussion can be found on IBM's YouTube channel.

Claude Mythos: Marketing hype or the end of cybersecurity? — from IBM

Dustin Heywood, Executive Managing Hacker and Senior Technical Staff Member at IBM Security, provided a deep dive into the technical aspects of cybersecurity. His background as a "hacker" gives him a unique perspective on the methodologies and tools used by both attackers and defenders.

AI's Dual Role: Finding and Exploiting Vulnerabilities

The discussion kicked off by examining the application of AI in identifying software vulnerabilities. Anthropic's Claude, a powerful AI model, is reportedly being used to find bugs that human analysts might miss. This capability, while beneficial for security researchers, also raises concerns about its potential misuse by malicious actors.

Heywood highlighted the potential for AI to be used in a more offensive capacity. He noted that it's easier for attackers to use AI to find vulnerabilities than it is for defenders to patch them. This creates a challenging dynamic where the pace of AI development could outstrip defensive measures.

The FBI's 2025 Internet Crime Report: A Stark Warning

Kosinski introduced the FBI's 2025 Internet Crime Report, painting a grim picture of the current state of cybercrime. The report indicated a significant year-over-year increase in reported incidents and financial losses. A particularly concerning trend identified was the growing use of AI by cybercriminals to enhance their tactics.

Alvarez elaborated on this, explaining how AI is being used to craft more sophisticated phishing emails, create realistic fake audio and video content (deepfakes), and automate the process of finding and exploiting vulnerabilities. This "AI-powered" threat landscape means that even seemingly legitimate communications or requests could be part of a larger, more insidious attack.

Recruiting Insiders: A Growing Threat Vector

A key takeaway from the conversation was the rise of attackers actively recruiting insiders. Heywood pointed out that this is not a new tactic, but AI is making it more efficient and widespread. Attackers can use AI to identify potential targets within organizations based on their online presence and then craft personalized lures.

Alvarez added that the increasing reliance on AI in legitimate business operations also creates new avenues for exploitation. As companies integrate AI into their workflows, the potential for malicious actors to leverage these same tools for their own gain becomes a significant concern.

The Challenge of Open Models and Responsible Disclosure

The discussion touched upon the debate surrounding the release of powerful AI models. While open-sourcing AI can foster innovation and collaboration, it also presents risks if these models fall into the wrong hands. Anthropic's decision to limit access to its most powerful models, like Claude, was discussed as a potential path towards responsible AI development.

Heywood commented on the inherent difficulty in controlling the spread of powerful AI tools once they are made public. He noted that even with safeguards, determined actors will likely find ways to access and utilize these models for malicious purposes. This raises the question of how to balance the benefits of open access with the need for robust security measures.

The Way Forward: Automation and Vigilance

In response to the growing sophistication of AI-driven cyber threats, the experts emphasized the need for organizations to adopt more automated and proactive security measures. Heywood stressed the importance of leveraging AI for defense, using it to detect anomalies and respond to threats more quickly.
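The podcast does not describe a specific detection system, but the idea of using automation to flag anomalies can be illustrated with a minimal sketch. The example below is purely hypothetical: it flags entries in a series of event counts (say, hourly failed-login counts) that sit unusually far above the baseline, using a simple z-score test. Real anomaly-detection tooling would use far richer features and models.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Flag values more than `threshold` standard deviations above the mean.

    `counts` might be, e.g., hourly failed-login counts for one account.
    Returns the indices of anomalous entries.
    """
    if len(counts) < 2:
        return []  # not enough data to establish a baseline
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# A spike of 90 failures against a quiet baseline is flagged at index 7.
print(flag_anomalies([2, 3, 1, 2, 4, 2, 3, 90]))  # -> [7]
```

Even a crude baseline like this captures the point the panel made: machines can watch event streams continuously and surface outliers faster than a human analyst reviewing logs by hand.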

Alvarez echoed this sentiment, highlighting that organizations need to be more vigilant about their internal security practices, including employee training and access controls. The trend of attackers recruiting insiders is a stark reminder that the human element remains a critical vulnerability.

Ultimately, the conversation underscored that while AI presents new and significant challenges in cybersecurity, it also offers powerful tools for defense. The key for organizations will be to understand these evolving threats, adapt their strategies accordingly, and foster a culture of security awareness at all levels.

© 2026 StartupHub.ai. All rights reserved.