ChatGPT now generates Malware mutations

    ChatGPT, the AI-based chatbot developed by OpenAI, can answer almost anything. However, can you imagine that the chatbot's assistance is also being used to create malware and its various mutations? Threat intelligence company WithSecure has discovered this activity and immediately raised a red alert.

    Tim West, head of threat intelligence at WithSecure, believes that the creation of malware through artificial intelligence will increase challenges for defenders.

    As the software is readily available without meaningful restrictions, malicious actors can deliberately abuse it, just as they currently abuse remote access tools to break into corporate networks, said Mr. West.

    Stephen Robinson, a senior threat intelligence analyst at the company, stated that cyber criminals will evolve their malware distribution methods by mimicking the activities of legitimate businesses and spreading malware through masqueraded emails and messages.

    One such instance was observed online a couple of weeks ago, when cyber criminals were caught spreading malicious software tools through Facebook, WhatsApp, and Instagram. Security teams at Meta, which owns all three platforms, identified at least 10 malware families posing as ChatGPT and similar AI tools, and using the messaging platforms to spread malware.

    In one such case, the threat actors created browser extensions that claimed to offer the AI platform's services. In reality, they tricked people into downloading malware such as DuckTail, which can steal information from victims' Facebook login sessions, including two-factor authentication codes, location data, and other account details.

    Initially, Vietnamese threat actors were suspected to be behind the incident. However, Cisco Talos, which had been tracking the hackers since September 2022, assessed that the attack was the work of either Chinese or Russian hackers who were obscuring their activity to make it appear to originate from Vietnam.

    NOTE: As I always say, it is not the software that is at fault. Rather, it is the human mind that must be held responsible, as people can use AI-based technology for both creative and destructive purposes.

    Naveen Goud
    Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security and Mobile Security
