The persistent threat of social engineering, now supercharged by artificial intelligence, casts a long shadow over corporate cybersecurity. As Jordan Robertson, Bloomberg's cybersecurity reporter, highlighted at Bloomberg Tech in London, a disturbing trend sees sophisticated breaches originating not from complex technical exploits but from simple phone calls. These attacks, often executed by "kids, young people using the phone to hack these organizations," bypass layers of digital defenses by manipulating call center staff and inflict what Robertson called an "extraordinary amount of damage." This stark reality formed the crux of a panel discussion featuring Mary Haigh, Deputy Global Chief Information Officer at BAE Systems PLC, and Tim Erridge, Vice President & Managing Partner, EMEA, Unit 42, Palo Alto Networks, who delved into the evolving landscape of cyber resilience in the age of AI.
Haigh offered a detailed perspective on how a defense giant like BAE Systems fortifies its human-centric vulnerabilities. While its helpdesk operations are often outsourced, the company maintains rigorous oversight, imposing stringent clearance requirements and nationality checks on staff because of the sensitive nature of its work. A crucial component of its strategy is robust security education and awareness training. The program focuses on "the human angle of what are the human vulnerabilities," aiming to inform employees about diverse attacker methodologies rather than to assign blame. Staff are encouraged to "just pause if it doesn't feel right."
Beyond training, BAE Systems employs a multi-layered process for sensitive actions like password resets. It’s not merely a series of security questions; if a caller cannot provide all required information, the request is escalated to a "trusted person"—typically a manager—who must then personally verify the identity. This "very, very high level of validation" underscores the lengths to which organizations must go to protect against social engineering. Haigh acknowledged the constant evolution of these threats, stating, "the bar is moving all the time, because the ways that you can impersonate are moving."
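To make the escalation logic concrete, here is a minimal Python sketch of such a tiered reset workflow. It illustrates the pattern Haigh described, not BAE Systems' actual system; the directory, field names, and `handle_reset` function are invented for this sketch.

```python
from dataclasses import dataclass

# Illustrative directory of verification answers; a real system would query
# an HR or identity backend. All names and values here are hypothetical.
KNOWN_ANSWERS = {
    "emp-1042": {"employee_number": "1042", "cost_centre": "A7", "start_year": "2015"},
}

@dataclass
class ResetRequest:
    caller_id: str
    answers: dict
    manager_verified: bool = False  # set only after a trusted person vouches for the caller

def handle_reset(req: ResetRequest) -> str:
    expected = KNOWN_ANSWERS.get(req.caller_id)
    if expected is None:
        return "denied: unknown caller"
    # Every knowledge check must pass; a single miss escalates rather than
    # letting the caller retry or partially qualify.
    if all(req.answers.get(k) == v for k, v in expected.items()):
        return "approved: all knowledge checks passed"
    if req.manager_verified:
        return "approved: identity personally verified by a trusted person"
    return "escalated: route to the caller's manager for personal verification"

# Example: a caller who misses a check is escalated, not retried.
print(handle_reset(ResetRequest("emp-1042", {"employee_number": "1042"})))
# -> escalated: route to the caller's manager for personal verification
```

The design choice worth noting is that failure routes to a human escalation path rather than to more questions, which is what makes the validation bar hard for an impersonator to grind down.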
Tim Erridge echoed the sentiment regarding the escalating sophistication of attackers. His team at Unit 42 frequently responds to breaches in which initial access was gained through social engineering or the exploitation of a known software vulnerability: "over 70% of our incidents [we] worked, it was either social engineering or it was an exploit of a software vulnerability that's exposed externally." Approximately one-third of these cases stemmed from social engineering. Erridge emphasized that these are "fixable problems," not insurmountable futuristic attacks. The adversary's capabilities, however, are "ever evolving," as attackers readily adopt AI for nefarious purposes without the ethical or legal constraints faced by legitimate organizations.
The weaponization of AI by attackers manifests in various forms. Erridge described the use of deepfakes, false faces, and voice masking to create convincing impersonations. Attackers also leverage AI to harvest "a highly comprehensive dossier" on potential victims, enabling them to craft highly tailored and persistent social engineering attempts. The sheer doggedness of an AI-powered attacker, capable of relentless engagement over extended periods, poses a unique challenge: human targets, unlike AI, eventually succumb to pressure and fatigue.
Palo Alto Networks is actively exploring defensive applications of AI, including a fascinating "dueling chatbot" project. This initiative pits an adversarial AI chatbot, trained on real-world social engineering tactics (like those used by groups such as "Scattered Spider"), against a defender chatbot, designed to embody best practices of a call center engineer. The aim is to identify and strengthen weak points in defensive protocols before real-world attacks exploit them. Erridge noted that AI's persistence, a major advantage for attackers, could also be leveraged by defender chatbots to relentlessly challenge incoming requests.
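The mechanics of such a duel can be sketched as a simple turn-taking loop between two agents. The snippet below is a conceptual illustration only: both agent functions are placeholders standing in for LLM calls, and nothing here reflects Palo Alto Networks' actual implementation.

```python
def attacker_turn(history: list[str]) -> str:
    """Adversarial agent: in a real duel, an LLM prompted with recorded
    social-engineering playbooks (e.g., Scattered Spider tactics)."""
    return f"attacker[{len(history)}]: urgent -- reset the CFO's password now"

def defender_turn(history: list[str]) -> str:
    """Defender agent: an LLM playing a call-center engineer. Its advantage
    is the same persistence attackers enjoy: it never tires of challenging."""
    return "defender: I cannot action that without completing full identity verification"

def run_duel(max_turns: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_turns):
        history.append(attacker_turn(history))
        history.append(defender_turn(history))
        # A scoring hook here would flag any defender reply that leaks data
        # or skips a verification step; flagged transcripts expose the weak
        # points in the protocol before a real attacker finds them.
    return history

for line in run_duel(max_turns=2):
    print(line)
```

The useful output of such a loop is not the dialogue itself but the transcripts where the defender bends, since those are the protocol gaps to harden.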
Beyond direct defense, Haigh highlighted BAE Systems' internal use of generative AI for knowledge transfer. In niche product areas where highly skilled, long-serving employees are approaching retirement, there is a risk of losing invaluable tacit knowledge. By interviewing and recording these experts, BAE Systems can apply generative AI to the audio, transforming intangible knowledge into structured, searchable training material for younger apprentices. This "closed-ended research" on trusted, verified data, with a human in the loop, offers significant efficiency gains in both task execution and training.
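As a rough illustration of that pipeline, the sketch below chains transcription, generative restructuring, and a human review gate. The two model steps are stubs (a speech-to-text service and a generative model would sit behind them), and none of the function names describe BAE Systems' actual tooling.

```python
def transcribe(audio_path: str) -> str:
    # Stub for a speech-to-text pass over the recorded expert interview.
    return "Q: How do you diagnose fault F? A: Check sensor S before swapping the unit ..."

def summarise_to_module(transcript: str, topic: str) -> dict:
    # Stub for a generative-model call. Crucially, the use is "closed-ended":
    # the model only restructures this trusted, verified transcript rather
    # than pulling in outside sources.
    return {"topic": topic, "source": transcript,
            "steps": ["check sensor S", "then swap the unit"], "approved": False}

def human_review(module: dict) -> dict:
    # Human-in-the-loop gate: a domain expert signs off before the module
    # is published as training material for apprentices.
    module["approved"] = True  # set only after manual sign-off
    return module

def build_training_material(audio_path: str, topic: str) -> dict:
    return human_review(summarise_to_module(transcribe(audio_path), topic))

print(build_training_material("expert_interview_01.wav", "fault F diagnosis"))
```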
The rapid acceleration of cyber threats demands an equally swift defensive response. The timeline from a vulnerability's public disclosure to its active exploitation has shrunk to mere hours. This compressed window means organizations must be faster and more agile in their defense strategies. The insights from Haigh and Erridge underscore that while AI presents unprecedented challenges to cybersecurity, it also offers powerful tools for resilience, provided they are developed and deployed with careful consideration and continuous adaptation.