"Getting ready this year has been extremely difficult," remarked Patrick Gormley, Senior Vice President, Data Science & AI Consulting Lead at Kyndryl Consult, at Bloomberg Tech in London. He spoke with Bloomberg's Amy Thomson, joined by Daniel Hulme, Chief AI Officer at WPP, and Paul O’Sullivan, Senior Vice President & Chief Technology Officer at Salesforce UKI, about the accelerating pace of AI innovation and the ensuing challenges for enterprise. The conversation centered on AI readiness, emphasizing that the question is no longer *if* to deploy AI, but *how* to do so effectively and responsibly.
The "insane pace of technology innovation" is a core insight that reverberated through the discussion. Gormley illustrated this with striking examples: the ability to orchestrate AI agents jumped from six in January to "infinite" by early summer. He further highlighted Anthropic’s Claude Sonnet 4.5 model, released just weeks prior, which can autonomously code for 30 hours, a significant leap from its predecessor’s 7-hour capability just two months before. This rapid advancement means that "best-in-class technology today can be obsolete in 12 weeks," a phenomenon unprecedented in business history, demanding a constant re-evaluation of strategies and investments.
While the speed of technological change is undeniable, Paul O’Sullivan of Salesforce underscored that the journey to AI value extends beyond mere technical adoption. "Every C-level that we talk to…they all know that AI is going to play a crucial role in their strategy moving forward," he noted. However, the critical challenge now is identifying *where* to extract tangible value. The initial wave saw a "proliferation of proof of concepts and pilots" following the launch of generative AI tools like ChatGPT in late 2022. Yet, many organizations are still grappling with how to translate these experiments into real business impact. O’Sullivan stressed the need for a robust ecosystem around LLMs, integrating trust, security, governance, and contextual embedding to ensure these powerful models understand and serve specific business objectives.
Another critical insight that emerged was the paramount importance of human capital and organizational change management. Hulme emphasized that AI projects "don't fail for AI reasons; they fail for the same reasons why software fails," often due to inadequate investment, poor maintenance, or a lack of strategic alignment. He outlined three pillars for successful AI deployment: data, AI talent, and leadership. WPP, for instance, is investing £300 million annually in WPP Open, an end-to-end marketing platform that not only leverages AI but also addresses governance, security, and safety. This holistic approach empowers 70,000 employees to use AI tools effectively.
The rapid evolution of AI also makes continuous learning and reskilling imperative for the workforce. Citing an IBM report, Gormley pointed to a "billion gap of skills in the global market," underscoring the urgent need for AI engineers and designers. However, it's not just about acquiring new technical skills; it's about adapting to constant change. O’Sullivan echoed this, emphasizing the need for foundational learning in areas like security and prompt engineering. Salesforce offers learning modules on its free Trailhead platform to upskill individuals, recognizing that the future workforce will involve a collaborative augmentation of human and AI agents. This shift requires understanding new roles and responsibilities rather than fearing job displacement.
The discussion concluded by addressing the crucial aspects of governance and risk. Gormley stressed that most risks are "caused by humans" rather than the technology itself, citing an instance where an AI-generated client deliverable was inaccurate due to a lack of process. Every AI agent represents a potential vulnerability requiring careful management. The panel agreed that the focus should be on "humans in control" rather than fully autonomous AI. Paul O’Sullivan underscored the need for observability across agent execution and clear accountability. He emphasized applying established architectural design principles like "security by design" to AI deployments, ensuring that every hand-off between agents, and the information they process, is auditable and compliant. Daniel Hulme added that companies must not only consider what happens if AI goes wrong but also what happens if it goes "very right," as over-optimization in one area could inadvertently cause harm elsewhere in the supply chain. The key is a holistic understanding of AI's impact and a strategic, well-governed approach to its integration.