OpenAI Launches GPT-5 Bio Bug Bounty: How Researchers Can Test Universal Jailbreaks and Earn Up to $25,000

Quick overview

OpenAI has opened the GPT-5 Bio Bug Bounty, inviting security and biosafety researchers to test GPT-5 for biological safety vulnerabilities. The program offers rewards of up to $25,000 for valid findings, and it specifically calls out a universal jailbreak prompt as a target for testing. The announcement sets a clear aim: find ways the model could be coaxed into producing unsafe biological guidance so those issues can be fixed.

This post explains what the Bio Bug Bounty is, who it is for, how to take part, and why it matters for public safety and AI governance. The information here is based on the official OpenAI announcement and the program summary sent to researchers.

What is the GPT-5 Bio Bug Bounty?

The Bio Bug Bounty is a targeted, crowdsourced testing program run by OpenAI. It asks external researchers to probe GPT-5 for weaknesses that could enable the model to produce harmful biological information. The program is aimed at security professionals, biosafety researchers, and others with relevant expertise. Rewards go up to $25,000 depending on the severity and novelty of findings.

Primary goals

  • Identify prompts and techniques that bypass safety measures, including a universal jailbreak prompt that works across many inputs and contexts.
  • Clarify where model responses could lead to disallowed biological instructions or harmful operational details.
  • Improve mitigation strategies so GPT-5 is safer for public use.

Scope and what counts as a valid finding

The bounty focuses on vulnerabilities related to biological safety. Valid findings typically show that GPT-5 can be tricked into producing disallowed biological information, such as step-by-step lab procedures, actionable protocols, or other content that could increase the risk of misuse.

OpenAI has called out a universal jailbreak prompt. This is a prompt designed to broadly circumvent guardrails across many inputs and contexts. Researchers are asked to demonstrate how such techniques work against GPT-5, and to provide clear, reproducible evidence of the failure mode.

Examples of valid and invalid test targets

  • Valid: prompts that induce the model to output disallowed biological instructions, with logs and minimal context for reproducibility.
  • Valid: demonstrations of multi-step prompt attacks that degrade safety filters and produce harmful content.
  • Invalid: testing that requires creating or distributing real biological agents, or any work that crosses into real-world lab experiments.
  • Invalid: sharing findings publicly or with third parties before responsible disclosure to OpenAI, since the policy requires private reporting first.

Eligibility, rules, and the responsible disclosure process

The program is aimed at qualified researchers with expertise in security or biosafety. OpenAI requires responsible disclosure, meaning researchers should report findings privately through the official reporting channel and not publish or weaponize the vulnerability before OpenAI has had a chance to respond.

Typical steps in the disclosure process include initial submission, acknowledgment from the vendor, triage and validation, and reward determination. Researchers should expect the communication timeline to include an initial acknowledgment, follow-up questions for clarification, and later confirmation of a reward decision.

Permitted and prohibited testing

  • Permitted: interactive testing of GPT-5 via approved interfaces, simulated prompts, and clear logs that show model behavior.
  • Prohibited: any actions that enable real-world biological harm, such as using model output to run or develop experiments.
  • Prohibited: public disclosure of exploit details before OpenAI confirms mitigation and a safe disclosure timeline.

Ethical and legal safeguards

OpenAI states the program includes safeguards to prevent misuse and protect researchers. These are intended to keep testing within digital environments, to require private reporting, and to avoid encouraging harmful actions. The program also expects researchers to follow local laws and institutional rules.

The emphasis on responsible disclosure is a safety measure. It helps ensure vulnerabilities are patched before they can be exploited. It also reduces risk to researchers who might otherwise be exposed to legal or ethical issues by publishing sensitive findings prematurely.

Why bio-related jailbreaks are especially sensitive

Biosafety-related jailbreaks can produce information that lowers barriers to biological misuse. Unlike generic text exploits, outputs that explain how to culture organisms or synthesize compounds can pose real-world risk if acted on.

AI models combine knowledge with instruction following. When a model is coaxed into giving procedural biological details, the result can be actionable guidance at odds with public health protections. That is why testing focuses on preventing those outputs from appearing at all.

Technical context: model capabilities and safety mitigations

Large language models like GPT-5 are trained on broad data and can generalize. The same generalization that helps with legitimate tasks can be exploited by carefully designed prompts. Safety mitigations sit on two planes: training-time work that shapes model behavior, and runtime filters and guardrails that attempt to detect and block disallowed content as it is generated.
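
To make the two-plane idea concrete, here is a minimal, purely illustrative Python sketch of the runtime plane: a raw generation call wrapped by an output check. The names (model_generate, classify_bio_risk) and the keyword heuristic are hypothetical stand-ins for this article, not OpenAI's actual safety stack.

    # Illustrative only: a toy runtime guardrail, not OpenAI's pipeline.
    # model_generate stands in for whatever raw generation call the platform uses.

    BLOCKED_MESSAGE = "Blocked: the response appears to contain disallowed biological content."

    def classify_bio_risk(text: str) -> bool:
        """Hypothetical classifier: flags text that looks like actionable bio protocol content."""
        markers = ("culture the organism", "growth medium protocol", "synthesis route")
        return any(marker in text.lower() for marker in markers)

    def guarded_generate(prompt: str, model_generate) -> str:
        """Wrap a raw generation call with a post-hoc safety check."""
        draft = model_generate(prompt)   # training-time alignment already shapes this output
        if classify_bio_risk(draft):     # the runtime plane inspects it before release
            return BLOCKED_MESSAGE
        return draft

In practice such a classifier would itself be a trained model rather than a keyword list, and checks typically cover both the prompt and the output; the sketch is only meant to show the layering.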

A universal jailbreak prompt targets the runtime and behavioral protections. It tries to trick the model into responding in a way that bypasses filters. That makes it useful to test, because defending against a universal prompt can strengthen multiple layers of security.

Precedents and expected impact

Bug bounties in software and AI have a track record of surfacing real weaknesses. Past programs have led to fixes for prompt injection, privacy leaks, and other safety problems. OpenAI and other organizations have used external crowdsourcing to find issues that internal teams missed.

The intended result is improved model safety and more robust deployment policies. Rewards and transparent triage can encourage skilled researchers to report vulnerabilities privately and responsibly.

Implications for biosecurity and public health risk management

This bounty matters beyond the AI community. If models can be coaxed into producing harmful biological guidance, there is potential for misuse that affects public health. Finding and fixing those weaknesses helps reduce that risk.

The program also contributes to AI governance. It signals a willingness to work with external experts on safety, and to provide a structured path for reporting sensitive issues rather than leaving discovery and disclosure to chance.

Practical advice for researchers who want to participate

If you are a qualified security or biosafety researcher, here are steps to prepare effective reports and increase your chance of reward.

Preparation and documentation

  • Confirm eligibility with the program rules and your institution. Only perform tests that the rules allow.
  • Design reproducible test cases. Keep inputs and outputs clear, and include timestamps and environment details so others can validate the finding.
  • Capture logs and step-by-step interaction records. Screenshots are useful, but machine-readable transcripts are better for triage (a minimal logging sketch follows this list).
  • Explain the risk. Describe how the model output could be used in the real world without revealing actionable steps unnecessarily.
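
As one way to keep transcripts machine readable, the sketch below appends each prompt and response pair to a JSON Lines file with a timestamp and model label. The field names and file layout are an assumption for illustration, not a format the bounty program requires.

    # Illustrative sketch: append each interaction to a JSON Lines file so
    # triage teams can replay it. Field names are an assumption, not an
    # official submission format.
    import json
    from datetime import datetime, timezone

    def log_interaction(path: str, prompt: str, response: str, model: str = "gpt-5") -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "prompt": prompt,
            "response": response,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Example usage:
    # log_interaction("findings.jsonl", "<test prompt>", "<model reply>")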

Reporting tips

  • Report privately through the official bug bounty submission channel rather than posting public examples.
  • Be concise and clear. Present the exploit, the impact, and how easily it is reproduced (a minimal report skeleton follows this list).
  • Include mitigation suggestions if you have them. Practical, low risk ideas can speed fixes.
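
One way to organize a submission is sketched below. The fields are hypothetical and chosen only to cover the exploit, its impact, and reproducibility; check the program's own submission form for the required structure.

    # Illustrative report skeleton; the fields are an assumption, not the
    # program's official submission schema.
    finding_report = {
        "title": "Universal jailbreak bypasses bio-safety refusals",
        "summary": "One-paragraph description of the failure mode.",
        "impact": "What class of disallowed biological content the model produced.",
        "reproduction_rate": "e.g. 8 of 10 attempts across fresh sessions",
        "transcript_file": "findings.jsonl",  # machine-readable log from testing
        "suggested_mitigation": "Optional: practical, low-risk ideas for a fix.",
    }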

What this signals about OpenAI’s approach

By offering a dedicated bounty for biological safety, OpenAI is acknowledging the unique risks tied to bio-related outputs. The program signals a preference for collaborative testing with outside experts, and for paying for high quality findings rather than relying solely on internal review.

Financial incentives make participation more attractive, while a clear disclosure process reduces the risk of premature public release. Together, these steps aim to bring more skilled reviewers into the safety cycle.

Call to action

Qualified researchers who want to participate should find the GPT-5 Bio Bug Bounty page on OpenAI’s website, read the eligibility and rules carefully, and follow the responsible disclosure instructions for private reporting. Keep test methods safe, document findings clearly, and do not publish exploit details before OpenAI confirms mitigation.

Key takeaways

  • OpenAI opened a GPT-5 Bio Bug Bounty aimed at security and biosafety researchers.
  • Rewards go up to $25,000 for valid findings, including demonstrations involving a universal jailbreak prompt.
  • Responsible disclosure and ethical safeguards are central. Public disclosure before vendor triage is not permitted.
  • Bio-related jailbreaks are sensitive because model outputs could enable real-world biological misuse.
  • Skilled researchers can help reduce public risk by reporting reproducible issues through the program.

FAQ

Who can submit findings?

The program targets qualified security and biosafety researchers. Check the official program rules for full eligibility details.

How much can I earn?

OpenAI states rewards of up to $25,000 depending on severity and novelty. The actual amount will depend on the assessment during triage.

Can I publish my results?

No. Public release of exploit details is not allowed until OpenAI completes a responsible disclosure process. Private reporting is required first.

What should I avoid testing?

Avoid any testing that requires or results in real-world biological experiments. Do not use outputs to carry out lab work. Keep testing strictly digital and within the boundaries outlined by the program.

Conclusion

The GPT-5 Bio Bug Bounty is a focused effort to find and fix biological safety gaps in a major AI model. By inviting qualified external researchers and offering financial rewards, OpenAI is using community expertise to strengthen safeguards. For researchers, participation comes with responsibilities. Follow the rules, report privately, and document findings carefully. For the public, the program is a step toward reducing risks from misuse of AI in biological contexts, and toward better governance of powerful models.
