OpenAI Is Paying Up to $20,000 For Users to Find Bugs In Its Programs

The artificial intelligence company is rolling out a "Bug Bounty Program," where people can report vulnerabilities for cash rewards.
Despite OpenAI's recent success, particularly the widespread adoption of ChatGPT, the company's programs aren't perfect; like any new technology, they will have bugs that need to be fixed.
This week, the artificial intelligence company announced it is rolling out a "Bug Bounty Program" in partnership with Bugcrowd Inc., a cybersecurity platform. The program calls on security researchers, ethical hackers, and "technology enthusiasts" to identify and report vulnerabilities in OpenAI's technology in exchange for cash rewards.
"We invest heavily in research and engineering to ensure our AI systems are safe and secure," the company stated. "However, as with any complex technology, we understand that vulnerabilities and flaws can emerge. We believe that transparency and collaboration are crucial to addressing this reality."
We're launching the OpenAI Bug Bounty Program — earn cash awards for finding & responsibly reporting security vulnerabilities. https://t.co/p1I3ONzFJK
— OpenAI (@OpenAI) April 11, 2023
Compensation for identifying system problems ranges from $200 to $6,500 per vulnerability, based on "severity and impact": "low-severity findings" start at $200, while "exceptional discoveries" can earn a maximum reward of $20,000.
Before outlining the scope of vulnerabilities it wants reported (and the corresponding rewards), the Bug Bounty participation page warns, "STOP. READ THIS. DO NOT SKIM OVER IT," directing users to read exactly which kinds of vulnerabilities qualify for cash.
Examples of "in-scope" vulnerabilities eligible for rewards include authentication issues, outputs that cause the browser application to crash, and data exposure. Safety issues that are "out of scope" and not eligible for rewards include jailbreaks and getting the system to "say bad things" to the user.
Screenshot of bugcrowd.com/openai.
As of Thursday morning, OpenAI had rewarded 23 vulnerability reports since launching the program, with an average payout of $1,054.
The company also says that while the program allows for authorized testing, it does not exempt users from OpenAI's terms of service, and content violations could result in being banned from the program.