
MIT Sloan and BCG Study Highlights Need for Robust Responsible AI Programs and CEO Involvement

StartupHub Team
Jun 22, 2023 at 9:56 AM · 2 min read

The swift adoption of generative AI over the past year has transformed the AI landscape, and Responsible AI (RAI) programs are straining to keep pace. Over half (53%) of organizations rely exclusively on third-party AI tools, with no internally developed AI, and those third-party tools account for 55% of all AI-related failures, according to recent research by MIT Sloan Management Review (MIT SMR) and Boston Consulting Group (BCG).

The report, "Building Robust RAI Programs as Third-Party AI Tools Proliferate," was based on a survey of 1,240 participants, representing organizations with a minimum of $100 million annual revenues, from 59 industries across 87 countries.

The report found that 78% of surveyed organizations rely heavily on third-party AI, exposing them to numerous risks: reputational damage, loss of customer trust, financial loss, regulatory penalties, compliance issues, and litigation. Yet one-fifth of these organizations do not evaluate those risks at all. The report encourages combining multiple approaches to vetting third-party tools: organizations that use seven different evaluation methods are more than twice as likely to uncover lapses as those using only three (51% vs. 24%).

The regulatory landscape is changing rapidly as AI-specific regulations multiply. About 51% of surveyed organizations are already subject to non-AI-specific regulations that affect their use of AI. These organizations have 13% more Responsible AI leaders and report fewer AI failures (32% vs. 38%) than peers not under similar regulatory pressure.

The report offers five recommendations for organizations navigating the rapid adoption of generative AI and its associated risks: (1) advance Responsible AI programs, (2) effectively evaluate third-party tools, (3) prepare for emerging regulations, (4) involve CEOs in Responsible AI initiatives for maximum success, and (5) increase investment in Responsible AI.

"Organizations need to reinforce and invest in a robust RAI program now," said Steven Mills, Chief AI Ethics Officer at BCG and report coauthor. "Even if it feels like the technology is outstripping your Responsible AI program's capabilities, the answer is to bolster your commitment to Responsible AI, not withdraw. Firms must assign leadership and resources to manage risks and deliver business value."

The report follows another BCG industry survey on generative AI adoption among CMOs. BCG also recently collaborated with Intel to accelerate enterprise adoption of generative AI.

#AI
#Enterprise AI
#Generative AI
#Responsible AI
