Breaking India News Today | In-Depth Reports & Analysis – IndiaNewsWeek
Technology

Revolutionizing the Reporting of Critical Flaws in AI Systems

March 16, 2025 4 Min Read

In late 2023, a group of independent researchers uncovered a significant glitch in OpenAI’s popular artificial intelligence model, GPT-3.5.

When instructed to repeat certain words a thousand times, the model would repeat the word endlessly before abruptly switching to nonsensical text and fragments of personal information drawn from its training data, including parts of names, phone numbers, and email addresses. The team that identified the issue worked with OpenAI to fix the flaw before it was made public. It is just one of scores of problems found in major AI models in recent years.

In a proposal released today, more than 30 prominent AI researchers, including some who found the GPT-3.5 flaw, argue that many other vulnerabilities affecting popular models are reported in problematic ways. They suggest a new scheme, supported by AI companies, that gives outsiders permission to probe their models and a process for disclosing flaws publicly.

“At the moment, it feels somewhat like the Wild West,” remarks Shayne Longpre, a PhD candidate at MIT and the primary author of the proposal. Longpre highlights that some so-called jailbreakers share their techniques for circumventing AI safeguards on the social media platform X, leaving both the models and their users vulnerable. Other jailbreaks are communicated to only one company, despite their potential impact on many. Moreover, some flaws, he notes, are kept under wraps due to concerns of being banned or facing legal repercussions for violating terms of service. “There are clear chilling effects and uncertainties,” he states.

Ensuring the security and safety of AI models is critically important, considering how widely the technology is utilized and its potential integration into countless applications and services. Powerful models require rigorous stress testing, or red teaming, as they may contain harmful biases, and certain prompts can cause them to bypass safeguards, yielding undesirable or dangerous outputs. These can incite vulnerable individuals to pursue harmful actions or assist malicious actors in creating cyber, chemical, or biological weapons. Some experts worry that models could aid cybercriminals or terrorists and might even turn against humans as they continue to evolve.

The authors recommend three measures to improve the third-party flaw disclosure process: adopting standardized AI flaw reports to streamline reporting; having large AI firms provide infrastructure and support to third-party researchers disclosing flaws; and developing a system that allows flaws to be shared among different providers.
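As an illustration only, a standardized flaw report of the kind the proposal envisions might capture fields like those sketched below. The field names and structure here are hypothetical assumptions for the sake of example, not the proposal's actual schema:

```python
# Hypothetical sketch of a standardized AI flaw report.
# All field names are illustrative assumptions, not taken from the proposal.
from dataclasses import asdict, dataclass, field


@dataclass
class AIFlawReport:
    model: str                      # affected model, e.g. "GPT-3.5"
    summary: str                    # one-line description of the flaw
    reproduction_steps: list        # minimal prompts/steps to reproduce
    severity: str = "unknown"       # e.g. "low", "medium", "high", "critical"
    affects_other_providers: bool = False  # flags the report for cross-provider sharing
    disclosed_to: list = field(default_factory=list)  # providers already notified

    def to_dict(self) -> dict:
        """Serialize the report for submission to a disclosure program."""
        return asdict(self)


# Example: the GPT-3.5 repetition flaw described above, expressed as such a report.
report = AIFlawReport(
    model="GPT-3.5",
    summary="Repeating a word many times causes the model to emit training data",
    reproduction_steps=["Ask the model to repeat a single word 1,000 times"],
    severity="high",
    affects_other_providers=True,
    disclosed_to=["OpenAI"],
)
```

A machine-readable format like this is what would make the third measure practical: a report flagged as affecting other providers could be routed to them automatically.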

This approach draws inspiration from the realm of cybersecurity, where legal protections and established protocols exist for external researchers to report bugs.

“AI researchers often lack clarity on how to disclose a flaw and may worry that their good faith efforts could expose them to legal risk,” explains Ilona Cohen, chief legal and policy officer at HackerOne, a firm that coordinates bug bounty programs. Cohen is a coauthor of the report.

Currently, large AI companies perform extensive safety checks on AI models before their launch. Some also partner with external firms for additional scrutiny. “Do these companies have enough personnel to tackle all the concerns related to general-purpose AI systems, which are utilized by hundreds of millions in applications we haven’t even begun to imagine?” asks Longpre. Several AI companies have started AI bug bounty programs. However, Longpre cautions that independent researchers risk breaching terms of service if they take it upon themselves to test powerful AI models.

Tagged: Education, Technology

© 2024 All Rights Reserved | Powered by India News Week