Breaking India News Today | In-Depth Reports & Analysis – IndiaNewsWeek
© 2024 All Rights Reserved | Powered by India News Week

Revolutionizing the Reporting of Critical Flaws in AI Systems

March 16, 2025

In late 2023, a group of independent researchers uncovered a significant glitch in OpenAI’s popular artificial intelligence model, GPT-3.5.

When instructed to repeat a specific word a thousand times, the model began repeating it endlessly, then abruptly switched to generating nonsensical text along with fragments of personal information drawn from its training data, including partial names, phone numbers, and email addresses. The research team that identified the issue worked with OpenAI to fix the flaw before making it public. The incident is one of many such issues found in major AI models in recent years.

In a proposal released today, more than 30 prominent AI researchers, including some who found the flaw in GPT-3.5, argue that many other vulnerabilities affecting popular models are disclosed in problematic ways. They propose a new scheme, supported by AI companies, that would give outside researchers permission to probe their models and a channel for disclosing flaws publicly.

“At the moment, it feels somewhat like the Wild West,” remarks Shayne Longpre, a PhD candidate at MIT and the primary author of the proposal. Longpre highlights that some so-called jailbreakers share their techniques for circumventing AI safeguards on the social media platform X, leaving both the models and their users vulnerable. Other jailbreaks are communicated to only one company, despite their potential impact on many. Moreover, some flaws, he notes, are kept under wraps due to concerns of being banned or facing legal repercussions for violating terms of service. “There are clear chilling effects and uncertainties,” he states.

Ensuring the security and safety of AI models is critically important, considering how widely the technology is utilized and its potential integration into countless applications and services. Powerful models require rigorous stress testing, or red teaming, as they may contain harmful biases, and certain prompts can cause them to bypass safeguards, yielding undesirable or dangerous outputs. These can incite vulnerable individuals to pursue harmful actions or assist malicious actors in creating cyber, chemical, or biological weapons. Some experts worry that models could aid cybercriminals or terrorists and might even turn against humans as they continue to evolve.

The authors recommend three primary measures to improve the third-party flaw disclosure process: adopting standardized AI flaw reports to streamline reporting; having major AI firms provide infrastructure and support to outside researchers who disclose flaws; and developing a system for sharing flaw reports among different providers.
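The proposal itself does not publish a report schema in this article, but a standardized flaw report of the kind described might look something like the following minimal sketch. All field names here are hypothetical illustrations, not the authors' actual format; the `GPT-3.5` repetition flaw from above is used as the example entry.

```python
from dataclasses import dataclass, asdict

@dataclass
class AIFlawReport:
    """Hypothetical standardized AI flaw report (illustrative fields only)."""
    model: str            # affected model, e.g. "GPT-3.5"
    vendor: str           # provider the flaw was disclosed to
    summary: str          # one-line description of the flaw
    reproduction: str     # prompt or steps that trigger the behavior
    impact: str           # e.g. "training-data leakage"
    affects_other_providers: bool = False  # flag for cross-vendor sharing
    disclosed_publicly: bool = False

# Example entry based on the GPT-3.5 incident described above
report = AIFlawReport(
    model="GPT-3.5",
    vendor="OpenAI",
    summary="Repeating a word indefinitely leaks memorized training data",
    reproduction="Ask the model to repeat a specific word a thousand times",
    impact="training-data leakage (partial names, phone numbers, emails)",
    affects_other_providers=True,
)

# A shared format lets the same report be serialized and forwarded to
# other providers whose models the flaw may also affect.
print(asdict(report)["summary"])
```

The point of such a common structure is the third measure above: a flaw filed once could be routed to every provider whose models exhibit the same behavior, rather than being reported to a single company.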

This approach draws inspiration from the realm of cybersecurity, where legal protections and established protocols exist for external researchers to report bugs.

“AI researchers often lack clarity on how to disclose a flaw and may worry that their good faith efforts could expose them to legal risk,” explains Ilona Cohen, chief legal and policy officer at HackerOne, a firm that coordinates bug bounty programs and a coauthor of the report.

Currently, large AI companies perform extensive safety testing on models before release, and some contract external firms for additional scrutiny. “Do these companies have enough personnel to tackle all the concerns related to general-purpose AI systems, which are utilized by hundreds of millions in applications we haven’t even begun to imagine?” asks Longpre. Several AI companies have also started AI bug bounty programs, but Longpre cautions that independent researchers risk breaching terms of service if they take it upon themselves to probe powerful AI models.



