Amid the frenzy in generative AI, OpenAI and Google are positioning themselves as voices of caution and reason. AI is a rapidly evolving technology with the potential to significantly impact many aspects of society, and both companies recently signaled further steps to ensure that AI development and deployment are conducted responsibly and under democratic governance.
Google outlined a policy agenda for responsible AI progress, focusing on three key areas: unlocking opportunity, promoting responsibility, and enhancing security. AI is expected to help industries produce more complex and valuable products and services, increase productivity, and boost small businesses and workers. While it can be a powerful force for progress, it could also amplify societal issues if not developed and deployed responsibly. Google recommends a multi-stakeholder approach to governance, which includes fundamental research to better understand AI's benefits and risks, the development of common standards and shared best practices, and proportional, risk-based regulation to ensure that AI technologies are developed and deployed responsibly.
Echoing its parent company, DeepMind also published a perspective underscoring the importance of responsible AI in the development and deployment of general-purpose models. It advocates a proactive approach to identifying and mitigating novel threats that could arise from these models, introducing a comprehensive framework for evaluating potential risks such as manipulation, deception, and cyber-offense capabilities.
