OpenAI is rolling out a new set of prompt-based safety policies aimed at helping developers build more age-appropriate AI experiences for teenagers. These policies are designed to work with the company’s open-weight safety model, gpt-oss-safeguard, making it easier to turn complex safety requirements into working classifiers for real-world applications.
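The core idea of a prompt-based policy is that the safety rules live in plain text supplied to the model at inference time, rather than being baked in through training. The sketch below illustrates that pattern; the policy wording, label set, and fail-closed parsing convention are illustrative assumptions, not OpenAI's official policy format or the gpt-oss-safeguard API.

```python
# Illustrative sketch of prompt-based safety classification.
# The policy text and labels here are hypothetical examples,
# not OpenAI's published Under-18 policies.

UNDER_18_POLICY = """\
Classify the user content against this policy:
- ALLOW: age-appropriate, supportive content for teens.
- BLOCK: content that encourages risky behavior for minors.
Respond with exactly one label: ALLOW or BLOCK."""


def build_classifier_prompt(policy: str, content: str) -> str:
    """Combine a plain-language policy and the content under review
    into a single classification prompt for a safety model."""
    return f"{policy}\n\nUser content:\n{content}\n\nLabel:"


def parse_label(model_output: str) -> str:
    """Extract the first recognized label from the model's reply.
    Defaults to BLOCK (fail closed) if no label is found."""
    for token in model_output.upper().split():
        if token in ("ALLOW", "BLOCK"):
            return token
    return "BLOCK"
```

In practice the built prompt would be sent to a hosted instance of a safety model such as gpt-oss-safeguard; because the policy is just text in the prompt, developers can revise their rules without retraining or fine-tuning anything.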
The move underscores OpenAI's commitment to balancing innovation with responsible deployment, particularly for younger users. The company believes that providing developers with capable models and robust safety tools is crucial for fostering a safer AI ecosystem. These new policies are a direct extension of OpenAI's broader efforts, including updates to its Model Spec with Under-18 principles and product-level safeguards like parental controls.