ChatGPT Credentials Snagged By Infostealers On 225k Infected Devices

At least 225,000 sets of OpenAI credentials were put up for sale on the dark web last year, potentially giving buyers access to sensitive data sent to ChatGPT.

The ChatGPT accounts compromised by information-stealing malware were discovered by researchers at Group-IB between January and October 2023; their findings were published in Group-IB’s Hi-Tech Crime Trends Report 2023/2024 on Thursday.

The stolen credentials were part of logs offered for sale on dark web marketplaces, harvested from devices infected with infostealers such as LummaC2, Raccoon and RedLine. These malware tools search infected devices for sensitive information such as login credentials and financial details.

Leaked ChatGPT credentials increased 36% between the first five months and the last five months of Group-IB’s research: more than 130,000 infected hosts were discovered between June and October 2023, compared with just under 96,000 between January and May. October, the final month of the study, saw the most thefts of OpenAI credentials, totaling 33,080 instances.

LummaC2 was the most common source of infostealer logs containing ChatGPT credentials between June and October 2023, with 70,484 cases, followed by Raccoon and RedLine with fewer than 23,000 cases each.

This marks a shift from earlier Group-IB data covering June 2022 through May 2023, which showed Raccoon as by far the most common stealer of OpenAI details (more than 78,000 infections), followed by Vidar and RedLine.

“Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code,” Group-IB Head of Threat Intelligence Dmitry Shestakov said in a statement last year. “Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”

Employees entering sensitive data, confidential documents into ChatGPT

These latest figures on ChatGPT account compromise come amid growing evidence of enterprise employees’ risky use of generative AI.

A June 2023 report by LayerX found that 6% of enterprise employees have pasted sensitive data into gen AI applications at least once, and that 4% paste sensitive data into these applications “on a weekly basis.” Of these exposures, 43% included internal business data, 31% included source code and 12% included personally identifiable information (PII).

A June 2023 study by Cyberhaven, which focused specifically on ChatGPT, yielded similar results: 4.7% of employees had pasted sensitive data into ChatGPT, and incidents of confidential data leakage to ChatGPT per 100,000 employees increased by 60.4% between March 4 and April 15 of that year.

More recently, Menlo Security said in a February 2024 report that there was an 80% increase in attempts by enterprise employees to upload files to gen AI sites between July and December 2023. It’s possible this is due to OpenAI adding the ability for premium users to upload files directly to ChatGPT in October 2023.

Notably, nearly 40% of attempted sensitive inputs to gen AI apps included confidential documents, according to Menlo Security, and more than half included PII.

OpenAI itself suffered a data leak in March 2023 due to a vulnerability that exposed 1.2% of ChatGPT Plus users’ names, email addresses and payment information.

Menlo Security recommends organizations take a layered approach to preventing sensitive information from leaking through gen AI use. This can include implementing copy-and-paste controls that prevent large amounts of text or known proprietary code from being pasted into input fields, and applying security controls to gen AI tools as a group rather than blocking gen AI sites on a domain-by-domain basis.
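Menlo Security’s report does not prescribe an implementation, but a copy-and-paste control of the kind it describes could, for example, be enforced by a browser-extension content script. The sketch below is illustrative only: the host list, character threshold and proprietary-code patterns are assumptions made for this example, not details from the report.

```typescript
// Illustrative sketch of a copy-and-paste control: block pastes into gen AI sites
// when the text is very large or matches known proprietary markers.
// Intended as an extension content script; all thresholds and lists are example values.

const GEN_AI_HOSTS = ["chat.openai.com", "chatgpt.com", "gemini.google.com"]; // example site list
const MAX_PASTE_CHARS = 2000; // hypothetical cutoff for "large amounts of text"
const PROPRIETARY_PATTERNS = [/BEGIN RSA PRIVATE KEY/, /Copyright \(c\) Example Corp/]; // example markers

function isGenAiSite(hostname: string): boolean {
  return GEN_AI_HOSTS.some((h) => hostname === h || hostname.endsWith("." + h));
}

// Inspect every paste before the page's own handlers see it (capture phase).
document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    if (!isGenAiSite(window.location.hostname)) return;

    const text = event.clipboardData?.getData("text/plain") ?? "";
    const tooLarge = text.length > MAX_PASTE_CHARS;
    const looksProprietary = PROPRIETARY_PATTERNS.some((p) => p.test(text));

    if (tooLarge || looksProprietary) {
      event.preventDefault(); // stop the paste from reaching the input field
      event.stopImmediatePropagation();
      console.warn("Paste blocked by data-protection policy."); // a real tool would report centrally
    }
  },
  true,
);
```

Registering the listener in the capture phase matters here: it lets the control veto the paste before the site’s own JavaScript processes the clipboard contents.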
