By Daniel Hofmann, CEO of Hornetsecurity
Large Language Models (LLMs) and generative AI technologies such as ChatGPT have brought significant benefits to businesses. However, the potential for misuse and accidental data exposure can prove costly for organisations. Recent incidents, such as Samsung's sensitive data leak through generative AI, underscore the need for careful handling of information when using AI tools. With AI use rising across industries, it is more important than ever that businesses and their employees are equipped to face the new cybersecurity challenges this technology is bringing to the fore.
Unveiling New Attacks
Recent research revealed that 90% of all cyber-attacks start with phishing, and that more than 40% of all emails have the potential to pose a threat to a business. By exploiting generative AI models, malevolent actors can craft near-perfect, highly deceptive phishing emails; and with the right prompts, as we tested in our security labs, it is remarkably simple to ask these models to create ransomware.
Additionally, the rise of ‘deepfakes’ and ‘DeepPhish’ techniques enables scammers to mimic voices and autonomously generate phishing emails that closely resemble genuine communications. AI-driven malware further complicates the security landscape: it can learn from its interactions with cloud environments to evade traditional security measures.
AI-Empowered State Hackers
The spread of generative AI is not only a concern for businesses; it also poses a serious threat to critical infrastructure (CRITIS) and government agencies. Authoritarian states are increasingly using spear phishing attacks to compromise security of supply, gather intelligence, and even steal cryptocurrencies.
Businesses and governments must act quickly to prepare and protect their employees and citizens from this new wave of AI-supported cyberattacks. Governments are deploying a host of tactics to combat the rise in sophisticated cyberattacks from hostile states, such as setting up dedicated teams to disrupt terrorist groups and state hackers. However, simple yet effective IT measures, such as email filters, firewalls, and network and data monitoring tools, remain as vital as ever.
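To illustrate what even a simple measure can do, the minimal Python sketch below scores an inbound message with two crude heuristics: a spoofed display name and pressure language in the body. The keyword list, weights, and threshold are assumptions for illustration, not tuned production values.

```python
# Minimal sketch of a rule-based email filter. The keywords, weights and
# threshold below are illustrative assumptions, not production values.
from email import message_from_string
from email.utils import parseaddr

SUSPICIOUS_KEYWORDS = {"urgent", "verify your account", "password", "wire transfer"}
SCORE_THRESHOLD = 2  # assumed cut-off for quarantining a message

def score_message(raw: str) -> int:
    """Return a crude risk score for a raw RFC 822 message."""
    msg = message_from_string(raw)
    score = 0

    # Heuristic 1: the display name claims a domain that the actual sender
    # address does not use, a common trait of spoofed phishing mail.
    display_name, address = parseaddr(msg.get("From", ""))
    if "@" in display_name and display_name.split("@")[-1] not in address:
        score += 2

    # Heuristic 2: pressure language typical of phishing lures.
    body = msg.get_payload()
    if isinstance(body, str):
        lowered = body.lower()
        score += sum(1 for kw in SUSPICIOUS_KEYWORDS if kw in lowered)

    return score

raw_mail = (
    'From: "security@yourbank.com" <alerts@attacker.net>\n'
    "Subject: Action required\n"
    "\n"
    "URGENT: please verify your account or it will be suspended.\n"
)
if score_message(raw_mail) >= SCORE_THRESHOLD:
    print("quarantine")  # in practice: route to a quarantine folder
```

In a real environment, this kind of logic lives inside a mail gateway or managed security service rather than a standalone script, but the principle of layered, rule-based checks is the same.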
Guarding against Generative AI
To defend against the malicious use of generative AI and to strengthen data protection and recovery, businesses must prioritise strong cybersecurity practices.
Firstly, companies need to make informed decisions about the pros and cons of using AI, namely whether its benefits outweigh the risks. Internal policies on the use of sensitive information with AI tools are a must if a company does not want to run the risk of accidentally divulging company secrets through the uncontrolled use of generative AI.
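As a sketch of how such a policy might be enforced in practice, the following Python snippet screens outgoing prompts for obviously sensitive content before they ever reach an external AI service. The patterns, and the decision to block rather than redact, are illustrative assumptions; a real deployment would sit behind a proxy and use a proper data loss prevention engine.

```python
# Sketch of a policy gate that screens prompts for sensitive data before
# they are sent to an external generative AI service. The patterns below
# are illustrative assumptions, not an exhaustive DLP rule set.
import re

SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "internal marker": re.compile(r"\b(confidential|internal only)\b", re.I),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of all sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarise this CONFIDENTIAL design document before our launch."
violations = screen_prompt(prompt)
if violations:
    # Block the request and tell the user which policy rule fired.
    print(f"Blocked: prompt matches {violations}")
else:
    pass  # safe to forward to the AI service
```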
Secondly, investing in cybersecurity infrastructure, personnel, and tools is essential for staying ahead of evolving threats. While cloud services like Microsoft 365 offer some level of built-in protection against spam and malware, this tends to be entry-level, so organisations would do well to implement add-on solutions to enhance security.
Finally, and most importantly, it is essential to implement employee cybersecurity training. Simulated spear phishing attacks help prepare and educate staff to identify potential threats and take the appropriate action, because ultimately a business's employees are the final line of defence. As enabling as AI may be in the creation of targeted attacks, phishing is still phishing.
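The tracking side of such a simulation can be surprisingly lightweight, as the Python sketch below shows: each employee receives a unique token in the lure's link, and any click is logged so follow-up training can be targeted. The hostname, token scheme, and employee addresses are hypothetical.

```python
# Sketch of the tracking side of an internal phishing simulation. Each
# employee gets a unique token in the lure's link so that clicks can be
# attributed. The addresses and URL below are illustrative assumptions.
import secrets

employees = ["alice@example.com", "bob@example.com"]

# Issue one opaque token per employee.
tokens = {secrets.token_urlsafe(8): addr for addr in employees}

def lure_link(token: str) -> str:
    # Hypothetical internal training host, not a real service.
    return f"https://training.example.internal/landing?t={token}"

def record_click(token: str) -> None:
    """Log a click and flag that employee for follow-up training."""
    addr = tokens.get(token)
    if addr:
        print(f"{addr} clicked the simulated phish -> enrol in refresher training")

for token, addr in tokens.items():
    print(f"send to {addr}: {lure_link(token)}")

# Simulate one employee clicking their link.
record_click(next(iter(tokens)))
```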
AI has the potential to reshape cybersecurity in ways that make malicious activity easier to execute and harder to detect. Yet responsible AI development, combined with the collective cybersecurity efforts of businesses, can fortify defences and protect critical infrastructure.