MIT Sloan and BCG Study Highlights Need for Robust Responsible AI Programs and CEO Involvement

The swift adoption of Generative AI over the past year has transformed the AI landscape, underscoring the need for Responsible AI programs to keep pace. More than half (53%) of organizations rely exclusively on third-party AI tools, with no internally developed AI of their own, and these tools account for 55% of all AI-related failures, according to recent research by MIT Sloan Management Review (MIT SMR) and Boston Consulting Group (BCG).

The report, "Building Robust RAI Programs as Third-Party AI Tools Proliferate," was based on a survey of 1,240 participants, representing organizations with a minimum of $100 million annual revenues, from 59 industries across 87 countries.

The report found that 78% of surveyed organizations rely heavily on third-party AI, exposing them to numerous risks, including reputational damage, loss of customer trust, financial loss, regulatory penalties, compliance issues, and litigation. One-fifth of these organizations do not evaluate those risks at all. The report encourages using multiple approaches to vet third-party tools: organizations that apply seven evaluation methods are more than twice as likely to identify lapses as those using only three (51% vs. 24%).

The regulatory landscape is rapidly changing as AI-specific regulations proliferate. About 51% of organizations surveyed are already subject to non-AI-specific regulations that affect their use of AI. These organizations have 13% more Responsible AI leaders and report fewer AI failures (32% vs. 38%) than those not under similar regulatory pressure.

The report offers five recommendations for organizations facing the rapid adoption of Generative AI and its associated risks: (1) advance Responsible AI programs, (2) effectively evaluate third-party tools, (3) prepare for emerging regulations, (4) involve CEOs in Responsible AI initiatives for maximum success, and (5) increase investment.

"Organizations need to reinforce and invest in a robust RAI program now," said Steven Mills, Chief AI Ethics Officer at BCG and report coauthor. "Even if it feels like the technology is outstripping your Responsible AI program's capabilities, the answer is to bolster your commitment to Responsible AI, not withdraw. Firms must assign leadership and resources to manage risks and deliver business value."

The report follows another survey by the consulting group on CMOs' adoption of Generative AI. BCG also recently collaborated with Intel to bolster adoption of Generative AI for enterprise use.

© 2023 StartupHub.ai. All rights reserved.