According to a recent report by cybersecurity firm SlashNext, a newly developed AI tool called WormGPT is being employed by cybercriminals to launch business email compromise (BEC) attacks. WormGPT operates similarly to OpenAI’s popular conversational AI bot, ChatGPT, but with malicious intent.
Over the past few weeks, hackers have been advertising jailbreak methods for ChatGPT on various technology forums. These jailbreaks are carefully crafted prompts that manipulate the chatbot into bypassing its safeguards, coercing it into divulging sensitive information or generating inappropriate content.
The implications of such manipulation are significant. A jailbroken model can be coaxed into producing misleading outputs and inappropriate content, posing substantial challenges for preventing cybercriminals from exploiting these platforms.
Daniel Kelley, a security researcher at SlashNext, highlights that BEC attacks orchestrated through WormGPT are particularly worrisome because of their refined grammar, which makes the fraudulent emails less likely to arouse suspicion.
To address this emerging threat, organizations should proactively implement preventive measures that can automate the identification of BEC attacks. Simultaneously, it is crucial to educate employees about the risks associated with AI-based BEC threats and provide them with strategies to mitigate these risks effectively.
A comprehensive security approach should also include monitoring employees’ email handling behavior using AI-based tools, as part of an overall strategy to bolster the organization’s security posture.
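To give a sense of what automated BEC identification can look like at its very simplest, the sketch below scores an email on two classic heuristics: a Reply-To domain that does not match the sender’s domain, and urgent payment language in the body. The function name, keyword list, and scoring weights are all illustrative assumptions; production systems layer many more signals (sender reputation, DMARC results, ML classifiers) on top of rules like these.

```python
import re

# Illustrative keyword list; real detectors use far richer signals.
URGENCY_KEYWORDS = re.compile(
    r"\b(urgent|wire transfer|payment|immediately|gift cards?|confidential)\b",
    re.IGNORECASE,
)

def score_bec_risk(sender_domain: str, reply_to_domain: str, body: str) -> int:
    """Return a crude risk score for a possible BEC attempt (hypothetical helper)."""
    score = 0
    # A Reply-To domain that differs from the sender's is a classic BEC indicator.
    if reply_to_domain and reply_to_domain != sender_domain:
        score += 2
    # Each urgent-payment phrase in the body adds to the score.
    score += len(URGENCY_KEYWORDS.findall(body))
    return score

flagged = score_bec_risk(
    "example.com",
    "examp1e.com",  # look-alike domain with a digit "1" in place of "l"
    "Please process this urgent wire transfer immediately.",
)
print(flagged)  # → 5 (2 for the domain mismatch, 3 keyword hits)
```

A score threshold would then route suspicious messages to quarantine or human review; the point is that well-written AI-generated emails defeat grammar-based spotting, so detection has to lean on metadata and behavioral signals instead.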