In late 2023, a group of independent researchers uncovered a significant glitch in OpenAI’s popular artificial intelligence model, GPT-3.5.
When instructed to repeat specific words a thousand times, the model began to endlessly repeat the word, then abruptly transitioned to generating nonsensical text and fragments of personal information sourced from its training data. This included parts of names, phone numbers, and email addresses. The research team that identified the issue collaborated with OpenAI to rectify the flaw before the information was made public. This incident is among a multitude of issues identified in major AI models in recent years.
In a proposal released today, over 30 notable AI researchers, including some who detected the flaw in GPT-3.5, contend that numerous other vulnerabilities affecting popular models are reported in problematic manners. They propose a new framework, supported by AI companies, that would grant external parties permission to examine their models and a method for publicly disclosing flaws.
“At the moment, it feels somewhat like the Wild West,” remarks Shayne Longpre, a PhD candidate at MIT and the primary author of the proposal. Longpre highlights that some so-called jailbreakers share their techniques for circumventing AI safeguards on the social media platform X, leaving both the models and their users vulnerable. Other jailbreaks are communicated to only one company, despite their potential impact on many. Moreover, some flaws, he notes, are kept under wraps due to concerns of being banned or facing legal repercussions for violating terms of service. “There are clear chilling effects and uncertainties,” he states.
Ensuring the security and safety of AI models is critically important, considering how widely the technology is utilized and its potential integration into countless applications and services. Powerful models require rigorous stress testing, or red teaming, as they may contain harmful biases, and certain prompts can cause them to bypass safeguards, yielding undesirable or dangerous outputs. These can incite vulnerable individuals to pursue harmful actions or assist malicious actors in creating cyber, chemical, or biological weapons. Some experts worry that models could aid cybercriminals or terrorists and might even turn against humans as they continue to evolve.
The authors recommend three primary measures to enhance the third-party flaw disclosure process: implementing standardized AI flaw reports to streamline reporting practices; for major AI firms to offer support to external researchers disclosing flaws; and establishing a system that facilitates the sharing of flaws among different providers.
This approach draws inspiration from the realm of cybersecurity, where legal protections and established protocols exist for external researchers to report bugs.
“AI researchers often lack clarity on how to disclose a flaw and may worry that their good faith efforts could expose them to legal risk,” explains Ilona Cohen, chief legal and policy officer at HackerOne, a firm that coordinates bug bounty programs and a coauthor of the report.
Currently, large AI companies perform extensive safety checks on AI models before their launch. Some even partner with external firms for additional scrutiny. “Do these companies have enough personnel to tackle all the concerns related to general-purpose AI systems, which are utilized by hundreds of millions in applications we haven’t even begun to imagine?” queries Longpre. Several AI companies have initiated AI bug bounty programs. However, Longpre cautions that independent researchers face the risk of breaching terms of service if they take it upon themselves to test powerful AI models.