![Malware Cybersecurity Insiders](https://www.cybersecurity-insiders.com/wp-content/uploads/Malware-1-696x398.jpeg)
In recent years, we have observed a disturbing trend in which hacking groups and threat actors from China have consistently targeted Western governments and businesses with cyberattacks. These attacks, whether politically or economically motivated, have often been linked to state or military intelligence support. The situation has now taken a somewhat unexpected turn, though one that still works largely to the advantage of Chinese hackers.
According to research from Check Point, cybercriminals are now leveraging large language models (LLMs) to build sophisticated malware, including ransomware, and to run phishing campaigns. What is particularly alarming is that many of these attackers are turning to models that have drawn comparatively little scrutiny in this context, such as Alibaba's Qwen LLM and DeepSeek AI, both of which have gained favor among malicious actors.
This raises a pressing question: Is there a legal framework in place to prevent the unauthorized use of LLMs in such harmful activities?
Unfortunately, there is currently no specific law that addresses the unauthorized use of AI tools like these, and enforcement is further complicated by jurisdictional boundaries. In the absence of a globally coordinated approach, companies or developers cornered in one jurisdiction can simply relocate their operations to a region or country with more lenient regulations. This leaves a significant gap in the ability to prevent the misuse of AI for malicious purposes.
So, who should take responsibility for preventing such misuse?
The onus falls squarely on the developers and businesses that create and control these powerful tools. To curb the potential for abuse, companies must reconsider the open-source nature of LLMs. By restricting access and making these tools available only through official logins or verified platforms, it becomes possible to track and monitor how these models are being used and to identify any malicious activities. Such restrictions would allow for greater accountability and help minimize the damage caused by these advanced technologies when misused.
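As a purely illustrative sketch of what such gated, auditable access could look like, the Python snippet below shows a minimal inference entry point that rejects unverified callers and writes an audit record for every request. The key store, log format, and function names are assumptions made for this example; they are not taken from any particular vendor's platform.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical store of verified accounts. In a real deployment this would be
# backed by an identity-verified registration flow, not a hard-coded dict.
VERIFIED_KEYS = {"key-123": "registered-user@example.com"}

# Audit log: one JSON record per request, so abuse analysts can review usage later.
logging.basicConfig(filename="llm_audit.log", level=logging.INFO, format="%(message)s")


def handle_request(api_key: str, prompt: str) -> str:
    """Gate a (placeholder) LLM call behind a verified key and log the usage."""
    user = VERIFIED_KEYS.get(api_key)
    if user is None:
        # Anonymous or unverified callers are rejected outright.
        raise PermissionError("Unrecognized API key: access requires a verified account")

    # Record who asked for what, and when, before serving the request.
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
    }))

    # Placeholder for the actual model call; the real response would come
    # from the hosted LLM sitting behind this gateway.
    return f"[model response to {len(prompt)} characters of input]"


if __name__ == "__main__":
    print(handle_request("key-123", "Summarize today's threat reports."))
```

The point of routing every call through a gateway of this kind is that misuse leaves a trail: accounts tied to malicious prompts can be investigated and revoked, which is not possible when model weights are distributed openly.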
In this regard, the Chinese government, led by President Xi Jinping, should take a more proactive stance in regulating the use of its LLMs. It must put strict measures in place to prevent these models from being exploited to develop malware or engage in phishing schemes. Without such safeguards, the risks of global security breaches and cybercrime will continue to rise.
Alternatively, if China fails to regulate the use of its AI platforms effectively, the international community must take a stronger stand. One potential course of action could be imposing bans on AI platforms that facilitate cybercrime. In fact, this is already happening with certain AI technologies.
For example, DeepSeek AI has already been banned or restricted in several jurisdictions, including the U.S. state of Texas, Taiwan, and India. Authorities in Italy, France, and Australia, as well as within the European Union, have followed suit, imposing restrictions on the platform in an effort to curb its potential for harm.
This growing trend of banning malicious AI tools should serve as a warning. The international community must come together to ensure that AI technology is used for the benefit of society, not to fuel cybercrime and malicious activities. If developers and governments do not take responsibility, the world could face even greater threats from the misuse of AI.