Amid the frenzy around generative AI, OpenAI and Google are positioning themselves as voices of caution and reason. AI is a rapidly evolving technology with the potential to significantly impact many aspects of society, and both companies recently signaled further steps to ensure that AI is developed and deployed responsibly and under democratic governance.
Google outlined a policy agenda for responsible AI progress, focusing on three key areas: unlocking opportunity, promoting responsibility, and enhancing security. AI is expected to help industries produce more complex and valuable products and services, increase productivity, and boost small businesses and workers. But while it can be a powerful force for progress, it could also amplify societal issues if not developed and deployed responsibly. Google recommends a multi-stakeholder approach to governance, including fundamental research to better understand AI's benefits and risks, the development of common standards and shared best practices, and proportional, risk-based regulation to ensure that AI technologies are built and used responsibly.
Echoing their parent company, DeepMind also published their perspective underscoring the importance of responsible AI in the development and deployment of general-purpose models. They advocate for a proactive approach to identifying and mitigating novel threats that could arise from these models, introducing a comprehensive framework for evaluating potential risks such as manipulation, deception, and cyber-offense capabilities.
With more powerful AI systems comes more responsibility to identify novel capabilities in models.

Our new research looks at evaluating future extreme risks, which may cause harm through misuse or misalignment.

Here's a snapshot of the work. https://t.co/Y499hpV4no

— Google DeepMind (@DeepMind) May 25, 2023
To prevent malicious uses of AI and to address bad actors, Google also suggests exploring next-generation trade control policies for specific applications of AI-powered software that are deemed security risks. They also advocate for cooperation in the form of joint research, adoption of best-in-class data governance, and public-private forums to share information on AI security vulnerabilities.
We're launching ten $100,000 grants for building prototypes of a democratic process for steering AI. Our goal is to fund experimentation with methods for gathering nuanced feedback from everyone on how AI should behave. Apply by June 24, 2023: https://t.co/kJG2bNnons
— OpenAI (@OpenAI) May 25, 2023
Similarly, OpenAI’s latest post emphasizes the importance of public oversight in the governance of powerful AI systems.
OpenAI is focusing on establishing a democratic process for deciding what rules AI systems should follow. The organization has launched a program to award ten $100,000 grants to fund proof-of-concept experiments in setting up such a process. They believe that decisions about how AI behaves should be shaped by diverse perspectives, reflecting public interest. The goal is to learn from the experiments and use them as the basis for a more global process going forward.
DeepMind also published research on this topic last year, leveraging deep reinforcement learning (RL) to discover economic policies that a majority of people would vote for in a simple game scenario. It served as a proof-of-concept demonstration of how AI systems can be trained to align with human values: the AI system was trained to directly maximize the stated preferences (or votes) of a group of people. Because the learning process was guided by the group's collective preferences, the resulting policies were less likely to be unsafe or unfair.
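The core idea of "maximizing stated preferences" can be sketched in a few lines of Python. The toy below is not DeepMind's actual setup: the simulated voters, the single-parameter redistribution policy, the approval rule, and the hill-climbing loop (standing in for deep RL) are all illustrative assumptions. What it does show is the key mechanism the research describes: the training signal is simply the share of votes a proposed policy receives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): each simulated "voter" has an ideal
# redistribution level in [0, 1] and approves of a policy if it lands
# close enough to that ideal point.
N_VOTERS = 1_000
APPROVAL_RADIUS = 0.2
ideal_points = rng.beta(2.0, 5.0, size=N_VOTERS)  # heterogeneous, skewed preferences

def vote_share(policy: float) -> float:
    """Fraction of voters who approve of (i.e. would vote for) the policy."""
    return float(np.mean(np.abs(ideal_points - policy) <= APPROVAL_RADIUS))

# The reward is the vote share itself: the learner is rewarded directly for
# proposing policies the group says it wants. A simple hill-climbing loop
# stands in here for the deep RL used in the actual research.
policy = 0.9                       # deliberately poor starting point
best = vote_share(policy)

for _ in range(500):
    candidate = float(np.clip(policy + rng.normal(scale=0.05), 0.0, 1.0))
    share = vote_share(candidate)
    if share >= best:              # keep a candidate only if more voters back it
        policy, best = candidate, share

print(f"learned policy: {policy:.2f}  (approved by {best:.0%} of voters)")
```

In this sketch the learned policy drifts toward wherever the most voters cluster, which mirrors the article's point: because the objective is the group's collective preference rather than a hand-written rule, policies that most voters would reject never score well during training.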