OpenAI and Google Signal Reason with Democratic and Responsible AI

StartupHub.ai Staff

Amid the frenzy in Generative AI, OpenAI and Google are prioritizing caution and voices of reason. AI is a rapidly evolving field with the potential to significantly impact many aspects of society, and both companies recently signaled further steps to ensure that AI is developed and deployed responsibly and under democratic governance.

Google outlined a policy agenda for responsible AI progress, focusing on three key areas: unlocking opportunity, promoting responsibility, and enhancing security. AI is expected to help industries produce more complex and valuable products and services, increase productivity, and boost small businesses and workers. While it can be a powerful force for progress, it could also amplify societal issues if not developed and deployed responsibly. Google recommends a multi-stakeholder approach to governance, which includes fundamental research to better understand AI's benefits and risks, development of common standards and shared best practices, and proportional, risk-based regulation to ensure that AI technologies are developed and deployed responsibly.

Echoing its parent company, DeepMind also published a perspective underscoring the importance of responsible AI in the development and deployment of general-purpose models. It advocates a proactive approach to identifying and mitigating novel threats that could arise from these models, introducing a comprehensive framework for evaluating potential risks such as manipulation, deception, and cyber-offense capabilities.

To prevent malicious uses of AI and to address bad actors, Google suggests exploring next-generation trade control policies for specific applications of AI-powered software that are deemed security risks. It also advocates cooperation in the form of joint research, adoption of best-in-class data governance, and public-private forums to share information on AI security vulnerabilities.

Similarly, OpenAI’s latest post emphasizes the importance of public oversight in the governance of powerful AI systems.

OpenAI is focusing on establishing a democratic process for deciding what rules AI systems should follow. The organization has launched a program to award ten $100,000 grants to fund proof-of-concept experiments in setting up such a process. They believe that decisions about how AI behaves should be shaped by diverse perspectives, reflecting public interest. The goal is to learn from the experiments and use them as the basis for a more global process going forward.

DeepMind also published research on this topic last year, leveraging deep reinforcement learning (RL) to discover economic policies that a majority of people would vote for in a simple game scenario. The work served as a proof of concept for training AI systems to align with human values: the system is trained to directly maximize the stated preferences (or votes) of a group of people. Because the learning process is guided by the group's collective preferences, the resulting policies are less likely to be unsafe or unfair.
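The core loop is easy to see in miniature. The sketch below is a toy stand-in, not DeepMind's actual implementation: it replaces deep RL with simple random-search hill climbing over a single redistribution parameter, and the simulated players, their preference model, and the voting rule are all assumptions chosen for brevity. What it preserves is the central idea of using the group's votes as the reward signal that the policy is optimized against.

```python
# Toy sketch of preference-maximizing policy search (assumptions throughout):
# simulated players vote on a proposed redistribution rate, and the "policy"
# is optimized to maximize its vote share. DeepMind's work used deep RL in a
# richer investment game; this stand-in uses random-search hill climbing.

import random

random.seed(0)

N_PLAYERS = 50
# Each simulated player has a private ideal redistribution rate (an assumed,
# simplified preference model).
players = [{"ideal_rate": random.uniform(0.0, 1.0)} for _ in range(N_PLAYERS)]

def vote_share(rate: float) -> float:
    """Fraction of players who vote for a policy with this redistribution rate.

    Assumed voting rule: a player votes 'yes' when the proposed rate is
    within 0.2 of their ideal rate.
    """
    votes = sum(1 for p in players if abs(p["ideal_rate"] - rate) < 0.2)
    return votes / N_PLAYERS

# Optimize the policy parameter directly against the group's votes.
rate, best = 0.5, vote_share(0.5)
for _ in range(200):
    # Perturb the current policy and keep the change if it wins more votes.
    candidate = min(1.0, max(0.0, rate + random.gauss(0.0, 0.05)))
    reward = vote_share(candidate)
    if reward >= best:
        rate, best = candidate, reward

print(f"learned redistribution rate: {rate:.2f}, vote share: {best:.2%}")
```

The design choice the sketch illustrates is that no hand-written notion of fairness appears anywhere: the only training signal is the aggregated preferences of the group, which is what makes the approach "democratic" in spirit.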
