OpenAI Seeks Bio-Hackers for GPT-5.5

OpenAI is launching a $25,000 "Bio Bug Bounty" for GPT-5.5, challenging researchers to find universal jailbreaks for biological risks.

OpenAI is initiating a bug bounty program to test the safety of GPT-5.5 against biological risks. (OpenAI News)

OpenAI is enlisting external researchers to probe for biological risks within its forthcoming GPT-5.5 model. The company announced a "Bio Bug Bounty" program, inviting experts in AI red teaming, security, and biosecurity to identify vulnerabilities.

The core challenge is to discover a single, universal jailbreak prompt that bypasses OpenAI's five-question bio-safety challenge against GPT-5.5, specifically within the Codex Desktop environment. This initiative reflects a growing effort to bolster safeguards for advanced AI capabilities, particularly those with potential biological implications, as detailed in OpenAI's research announcement.


The Bio Bug Bounty Program

The program, which opens for applications on April 23, 2026, offers a $25,000 reward for the first researcher to successfully devise a universal jailbreak that circumvents all five bio-safety questions without triggering moderation. Partial wins may also be awarded at OpenAI's discretion.

Applications are open until June 22, 2026, with testing scheduled from April 28 to July 27, 2026. Selected participants will gain access to the bio bug bounty platform under a strict non-disclosure agreement covering all prompts, findings, and communications. The program is a critical step in understanding and mitigating the dual-use risks inherent in powerful AI models like GPT-5.5.

The search for such vulnerabilities highlights the ongoing need for rigorous testing, including probing the limits of models with universal jailbreak techniques.

© 2026 StartupHub.ai. All rights reserved.