
"AI-specific safety scenarios covered by the new program include third-party prompt injection and data exfiltration attacks, disallowed actions performed by agentic OpenAI products on the company's website at scale, and other harmful actions performed by the products."
"Researchers are encouraged to identify abuse risks in agentic OpenAI products that perform actions on behalf of the user or access data as the user, including Atlas Browser, Codex, Operator, Connectors, and other ChatGPT tools."
"Researchers may earn up to $7,500 for reports that detail consistently reproducible issues of high severity, and which include a clear set of recommended steps or mitigations."
OpenAI's new public safety bug bounty program addresses AI-specific abuse and safety risks, complementing the company's existing security bug bounty program. It covers issues such as prompt injection, data exfiltration, and harmful actions performed by agentic OpenAI products, and it also accepts submissions related to proprietary-information exposure and account-integrity weaknesses. Researchers can earn up to $7,500 for consistently reproducible, high-severity issues that could lead to user harm. The program operates on Bugcrowd and includes additional rules for design and implementation flaws.
Read at SecurityWeek