Breaking India News Today | In-Depth Reports & Analysis – IndiaNewsWeek
Technology

Revolutionizing the Reporting of Critical Flaws in AI Systems

March 16, 2025

In late 2023, a group of independent researchers uncovered a significant glitch in OpenAI’s popular artificial intelligence model, GPT-3.5.

When instructed to repeat specific words a thousand times, the model began to repeat the word endlessly, then abruptly switched to producing incoherent text and fragments of personal information drawn from its training data, including parts of names, phone numbers, and email addresses. The team that identified the issue worked with OpenAI to ensure the flaw was fixed before the details were made public. It is just one of many flaws identified in major AI models in recent years.

In a proposal released today, more than 30 prominent AI researchers, including some who found the flaw in GPT-3.5, argue that many other vulnerabilities affecting popular models are reported in problematic ways. They propose a new scheme, to be supported by AI companies, that would give outside researchers permission to probe their models and a channel for disclosing flaws publicly.

“At the moment, it feels somewhat like the Wild West,” remarks Shayne Longpre, a PhD candidate at MIT and the primary author of the proposal. Longpre highlights that some so-called jailbreakers share their techniques for circumventing AI safeguards on the social media platform X, leaving both the models and their users vulnerable. Other jailbreaks are communicated to only one company, despite their potential impact on many. Moreover, some flaws, he notes, are kept under wraps due to concerns of being banned or facing legal repercussions for violating terms of service. “There are clear chilling effects and uncertainties,” he states.

Ensuring the security and safety of AI models is critically important, considering how widely the technology is utilized and its potential integration into countless applications and services. Powerful models require rigorous stress testing, or red teaming, as they may contain harmful biases, and certain prompts can cause them to bypass safeguards, yielding undesirable or dangerous outputs. These can incite vulnerable individuals to pursue harmful actions or assist malicious actors in creating cyber, chemical, or biological weapons. Some experts worry that models could aid cybercriminals or terrorists and might even turn against humans as they continue to evolve.

The authors recommend three main measures to improve the third-party flaw disclosure process: adopting standardized AI flaw reports to streamline reporting; having major AI firms provide infrastructure and support to external researchers who disclose flaws; and developing a system that allows flaw reports to be shared among different providers, since some flaws affect many models at once.
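
To make the first measure concrete, here is a minimal sketch of what a standardized, machine-readable AI flaw report might contain. The field names and schema below are hypothetical illustrations; the proposal does not prescribe this specific format.

```python
# Hypothetical sketch of a standardized AI flaw report.
# All field names are illustrative assumptions, not a published schema.
import json
from dataclasses import dataclass, asdict


@dataclass
class AIFlawReport:
    model: str                     # model identifier, e.g. "gpt-3.5-turbo"
    vendor: str                    # organization operating the model
    summary: str                   # one-line description of the flaw
    reproduction_steps: list       # prompts/steps that trigger the flaw
    severity: str = "unknown"      # e.g. "low", "medium", "high"
    affects_other_models: bool = False  # flag for cross-provider sharing

    def to_json(self) -> str:
        """Serialize the report for submission to a shared disclosure system."""
        return json.dumps(asdict(self), indent=2)


# Example report modeled on the GPT-3.5 incident described above
report = AIFlawReport(
    model="gpt-3.5-turbo",
    vendor="OpenAI",
    summary="Repeated-word prompt causes leakage of training data",
    reproduction_steps=["Ask the model to repeat a single word indefinitely"],
    severity="high",
    affects_other_models=True,
)
print(report.to_json())
```

A common format like this would let a single report be routed to every affected provider, which is the point of the proposed sharing system.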

This approach draws inspiration from the realm of cybersecurity, where legal protections and established protocols exist for external researchers to report bugs.

“AI researchers often lack clarity on how to disclose a flaw and may worry that their good faith efforts could expose them to legal risk,” explains Ilona Cohen, chief legal and policy officer at HackerOne, a firm that coordinates bug bounty programs and a coauthor of the report.

Currently, large AI companies conduct extensive safety testing on AI models before their launch, and some partner with external firms for additional scrutiny. “Do these companies have enough personnel to tackle all the concerns related to general-purpose AI systems, which are used by hundreds of millions of people in applications we haven’t even begun to imagine?” asks Longpre. Several AI companies have started AI bug bounty programs. However, Longpre cautions that independent researchers risk breaching terms of service if they take it upon themselves to test powerful AI models.


© 2024 All Rights Reserved | Powered by India News Week
