FTC starts data security probe into OpenAI's ChatGPT

The Federal Trade Commission (FTC) has turned its attention to ChatGPT, the conversational chatbot developed by OpenAI and backed by a major investment from Microsoft, over data privacy concerns. The regulator has asked the company to submit a detailed report outlining how it manages the risks associated with its AI models and how it safeguards consumer data.

In April 2023, several media outlets reported that payment information belonging to ChatGPT's premium subscribers had been exposed online, causing alarm among its user base. Compounding matters, privacy advocates have since filed complaints alleging that the chatbot generates false, misleading, derogatory, and controversial results that risk misleading users.

In May of this year, the issue reached the Senate, where OpenAI CEO Sam Altman testified before the Judiciary Subcommittee.

In light of the FTC's inquiry, senators are now demanding detailed information about the risks associated with the chatbot's AI and how the company addresses security concerns around data storage, processing, and analysis.

Meanwhile, in unrelated FTC news, X Corp, the parent company of Twitter, has filed a motion in a California district court to terminate the consent order on data security practices that Twitter entered into with the regulator back in 2011, following the discovery of several data security breaches on the social media platform. In May of last year, Twitter was fined $150 million for violating that order. Elon Musk's legal team has challenged the penalty and requested a reduction, which is currently under review.

Naveen Goud
Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security, and Mobile Security.
