OpenAI is enlisting external researchers to probe its forthcoming GPT-5.5 model for biological risks. The company has announced a "Bio Bug Bounty" program, inviting experts in AI red teaming, security, and biosecurity to identify vulnerabilities.
The core challenge is to find a single universal jailbreak prompt that can defeat OpenAI's five-question bio-safety challenge against GPT-5.5, specifically within the Codex Desktop environment. The initiative reflects a broader effort to strengthen safeguards around advanced AI capabilities, particularly those with potential biological implications, as detailed in OpenAI's research announcement.