Cybercriminals Pilfering OpenAI Credentials

Cybercriminals are increasingly turning to generative artificial intelligence tools as a new weapon of choice. Hundreds of thousands of OpenAI credentials are up for sale on the dark web, and a malicious counterpart to ChatGPT is gaining momentum.

Cybercriminals Finding Tools to Create Persuasive Phishing Emails

These tools appeal to seasoned and novice cyber miscreants alike. They are exploited to fabricate phishing emails that read as more believable to the target audience, heightening the odds of a successful breach.

Dark Web Users Swoop on OpenAI Credentials

According to data from Flare, a threat exposure management company, shared with BleepingComputer, users on the dark web and Telegram have mentioned ChatGPT, OpenAI’s artificial intelligence chatbot, more than 27,000 times over the last six months.

While scouring dark web forums and marketplaces, Flare researchers observed that OpenAI credentials are a hot item on the dark web: more than 200,000 OpenAI credentials have been spotted for sale in the form of stealer logs.

While the figure seems marginal against ChatGPT’s estimated 100 million active users in January, it underscores that threat actors see real potential for misuse in generative AI tools.

ChatGPT Accounts Become Cybercriminals’ New Playground

Earlier in June, a report from Group-IB, a cybersecurity company, revealed that dark web illicit marketplaces were trading stealer logs from info-stealing malware containing over 100,000 ChatGPT accounts.

The attraction of these utilities among cybercriminals has grown to such an extent that a ChatGPT clone named WormGPT has been developed, trained specifically on malware-related data. The tool is touted as the “best GPT alternative for blackhat” and is described as a ChatGPT alternative “that lets you do all sorts of illegal stuff.”

WormGPT: Built on GPT-J, Aiding BEC Attacks

WormGPT relies on GPT-J, an open-source large language model developed in 2021, and can produce human-like text. While its developer claims the tool was trained on a diverse set of data with a focus on malware-related information, they have given no insight into the specific datasets.
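
For context on how accessible the underlying model is, here is a minimal sketch of generating text with a public GPT-J checkpoint via the Hugging Face transformers library. The model ID and prompt are assumptions for illustration; nothing is publicly known about WormGPT’s actual fine-tuning data or serving setup.

```python
# Minimal sketch: text generation with the open-source GPT-J model.
# Assumes the public EleutherAI/gpt-j-6B checkpoint on Hugging Face;
# loading the full 6B-parameter model requires substantial RAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # public GPT-J checkpoint from 2021
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Generative language models are attractive to attackers because"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a fluent, human-like continuation of the prompt.
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```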

SlashNext, an email security provider, got its hands on WormGPT and conducted a few tests to ascertain the potential threat it embodies. The focus of the researchers was primarily on creating messages suitable for business email compromise (BEC) attacks.

In a particular experiment, WormGPT was instructed to concoct an email that would coax an unsuspecting account manager into paying a fraudulent invoice. The outcome was disquieting. WormGPT churned out an impressively persuasive and tactically astute email, revealing its potential for complex phishing and BEC attacks.

The Advantages and Challenges of AI in BEC Attacks

In light of their analysis, SlashNext researchers noted that generative AI could enhance a BEC attack. Alongside providing “impeccable grammar” that lends credibility to the message, it could also help less capable attackers execute attacks that surpass their own level of sophistication.

However, defending against this budding menace may prove challenging. Organizations can prepare by training their employees to scrutinize messages that demand urgent attention, especially those with a financial aspect. Enhanced email verification processes can also help, for instance by alerting on messages that originate outside the organization and by flagging keywords typically associated with a BEC attack, as sketched below.
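
To illustrate the kind of check that advice implies, here is a minimal, hypothetical sketch. The internal domain, the keyword list, and the function name are illustrative assumptions, not a production rule set or any vendor’s actual filter.

```python
# Hypothetical sketch of the flagging logic described above: warn on
# external senders and on urgency/finance keywords common in BEC lures.
# The domain, keyword list, and function name are illustrative only.
INTERNAL_DOMAIN = "example.com"
BEC_KEYWORDS = {"urgent", "wire transfer", "invoice", "payment", "gift card"}

def flag_message(sender: str, subject: str, body: str) -> list[str]:
    """Return warning labels to attach to an inbound email."""
    flags = []
    if not sender.lower().endswith("@" + INTERNAL_DOMAIN):
        flags.append("EXTERNAL SENDER")
    text = f"{subject} {body}".lower()
    hits = sorted(k for k in BEC_KEYWORDS if k in text)
    if hits:
        flags.append("BEC KEYWORDS: " + ", ".join(hits))
    return flags

# An external invoice demand trips both checks.
print(flag_message(
    "accounts@vendor-example.net",
    "URGENT: outstanding invoice",
    "Please process the wire transfer today.",
))
```

Real deployments layer these heuristics with sender authentication checks such as SPF, DKIM, and DMARC rather than relying on keyword matching alone.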