Generative AI: The Unseen Insider Threat

by Steve Povolny, Director, Security Research at Exabeam

Artificial intelligence, or AI, as it’s commonly known, is all the rage these days. The widespread availability of free generative AI tools like ChatGPT has allowed the technology to be embraced by the average person for many purposes, ranging from responding to emails to updating a résumé, writing code, or even developing movie scripts. While it may seem novel to some, AI has actually been around since the mid-1950s. Nearly 70 years later, it is rapidly transforming the world, and the security industry is no exception.

Leading this cybersecurity evolution are generative AI and natural language processing (NLP). For security professionals in the SOC, generative AI can distill security content into consequential, actionable conclusions. NLP, for its part, can greatly improve the user experience for searching, dashboarding, event correlation, and more. AI offers features that benefit various roles inside the SOC, and the prospect of streamlining or augmenting human capabilities to increase security throughout the enterprise is exciting.

However, there is a dark side to this innovation. Having generative AI search engines so accessible means that many of your employees and customers could be feeding sensitive data into them. What many people haven’t considered is that, like any other software, these AI search engines can be compromised. If and when this happens, it can create a major headache for your organization by raising risk, and insider threats can accelerate that risk. In this article, we’ll explore generative AI’s role in insider threats, as well as what organizations can do to protect against the dangers.
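
To make this risk concrete, the following minimal sketch (in Python, with invented pattern names and sample data, not a description of any particular product) shows one way an organization might screen prompts for obviously sensitive content before they ever reach a third-party AI service:

import re

# Hypothetical patterns for data that should never leave the organization
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Summarize: customer SSN 123-45-6789, card 4111 1111 1111 1111")
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Prompt allowed")

A real deployment would sit in a proxy or browser extension and cover far more data types; the point is simply that prompts can be inspected like any other outbound traffic.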

Same problems, evolving applications

Phishing attacks have plagued organizations since the early aughts, when cybercriminals became more organized around the technique of using email to deceive users into clicking malicious links and handing over sensitive information or account credentials. Unfortunately, generative AI can make this routinely successful cyberattack even more effective by producing persuasive missives that are nearly indistinguishable from legitimate messages, allowing criminals to dramatically improve their rate of success.

Of course, there is great irony in using generative AI in social engineering attacks, too. Social engineering typically relies on human interaction to be carried out, but generative AI can take the human out of the loop, making it harder for unsuspecting victims to determine whether they are dealing with a legitimate user who can be trusted. Generative AI has already led to widespread misinformation and the creation of fake profiles on social media. Deepfake technology in particular, which generates realistic images, videos, or audio impersonating a trusted individual, can easily manipulate other users, resulting in unauthorized access or the transfer of sensitive information that leads to theft or extortion.

We know from experience that threat actors do not lie in wait; they are working quickly to find new schemes and novel ways to compromise people using AI. The best defense is knowledge. Train your staff to be a critical defense layer for your organization. Regard all new employees as vulnerabilities until they are fully trained and aware of policies for proper AI usage in the workplace. Teach them to report suspected deepfakes or phishing to security teams immediately, and deploy stronger authentication methods so that it is more difficult for cybercriminals to impersonate staff. Regarding devices and systems, implementing a rapid and effective patching policy and inventory management systems can provide the awareness and responsiveness needed to deal with modern threats.

Don’t hate the player. Hate the game.

Another issue that we must consider with generative AI is its ability to compromise computing systems through the use of artificially generated data. “Adversarial” generated content can be weaponized to manipulate system behavior, fool advanced classification systems, or launch attacks on systems, leaving an organization vulnerable to breaches, data leaks, and other harmful security risks.
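
To illustrate what “adversarial” content means in practice, here is a minimal, self-contained sketch (a toy example with invented weights, not a real attack or a technique attributed to any specific system) showing how a small, targeted perturbation can flip the decision of a simple linear classifier:

import numpy as np

# A toy linear "malicious vs. benign" classifier: score = w . x + b
rng = np.random.default_rng(0)
w = rng.normal(size=10)      # pretend-trained weights
b = 0.0
x = rng.normal(size=10)      # an input the model currently classifies one way

def label(sample):
    return "malicious" if w @ sample + b > 0 else "benign"

score = w @ x + b
print("original prediction:", label(x))

# Adversarial perturbation in the spirit of FGSM: move each feature just far
# enough against the weight vector to push the score across the boundary.
epsilon = (abs(score) + 0.1) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("per-feature perturbation size:", round(float(epsilon), 4))
print("perturbed prediction:", label(x_adv))

The same principle, applied to image, text, or telemetry classifiers far more complex than this toy model, is what makes adversarial content a practical concern.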

Consider, for example, how impactful this situation becomes when combined with faux identities. A generated fake identity, such as an image, video, or social media account, could be leveraged to access sensitive information and bypass identity-based security measures.

Taking this a step further, machine learning can be used to progressively evade security systems, posing a significant threat, especially when it comes to launching AI-driven malware. AI-generated malware can change swiftly based on its target or environment, which is exactly what makes it much harder to detect and defend against.

The weaponization of AI works both ways

The good news is that, just as criminals are embracing AI, so are defenders. There are many benefits to using AI for each role within an organization, including security engineers, SOC analysts, and even the CISO. One of those benefits is automation, which empowers security teams by freeing them to focus on more complex tasks, such as designing and implementing new security measures.

AI can also identify security incidents more quickly and accurately, making a security analyst’s job easier by eliminating some of the noise associated with false positives. Because of this, an analyst can respond to incidents more effectively and reduce the risk of successful attacks.
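
As a simple illustration of how baselining cuts false-positive noise, the sketch below (with invented numbers, not any vendor’s actual detection logic) scores one day’s activity against a user’s own history and only raises an alert when the behavior is a genuine outlier:

import numpy as np

# Hypothetical example: daily download volume (MB) for one user over 30 days
rng = np.random.default_rng(1)
history = rng.normal(loc=200, scale=25, size=30)   # the user's normal baseline
today = 950.0                                       # today's observation

# Score today's activity against the learned baseline (z-score)
mean, std = history.mean(), history.std()
z = (today - mean) / std

# Only surface alerts that are genuinely unusual for *this* user,
# rather than firing on every fixed threshold breach.
if abs(z) > 3:
    print(f"Alert for analyst: {today} MB is {z:.1f} sigma above this user's baseline")
else:
    print("Within normal behavior; no alert raised")

Production systems learn far richer baselines across many signals, but the effect is the same: fewer spurious alerts reach the analyst, and the ones that do are worth investigating.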

Threat hunters can use AI outputs to achieve higher-fidelity detections and improve the search experience, while NLP can simplify the explanation of complex threats, strengthening hunting capabilities. SOC managers will understand threats more easily and can use natural language to search, develop playbooks, and generate and interpret dashboards.
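
As a toy illustration of the natural-language search idea (a real system would rely on an LLM or NLP model; the field names and keyword mappings here are entirely hypothetical), the sketch below turns a plain-English question into a structured log query:

# Hypothetical sketch: translate a plain-English question into a structured
# log query by mapping known phrases to query fields.
KEYWORD_FIELDS = {
    "failed login": {"event_type": "authentication", "outcome": "failure"},
    "privilege escalation": {"event_type": "privilege_change"},
    "vpn": {"source": "vpn_gateway"},
}

def to_query(question: str) -> dict:
    query = {}
    lowered = question.lower()
    for phrase, fields in KEYWORD_FIELDS.items():
        if phrase in lowered:
            query.update(fields)
    return query

print(to_query("Show me failed logins over the VPN from last night"))
# {'event_type': 'authentication', 'outcome': 'failure', 'source': 'vpn_gateway'}

The value for a SOC manager or hunter is not the mapping itself but the interface: questions asked in plain language come back as queries, playbooks, and dashboards they can act on.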

Finally, CISOs taking advantage of AI can gain a better understanding of their organization’s security posture and make more informed decisions about where resources are needed to address security incidents and vulnerabilities.

For those of us working in cybersecurity, the growth of attacks using generative AI might be novel, but the constant need to adjust our protection methods in an ever-evolving threat landscape is not. We are accustomed to adapting our approach, improving our technologies, and evolving our defenses to protect against the newest menace. This is what we must do now to address the growing threat of AI-driven attacks. We must invest in research, collaborate with policymakers and other cybersecurity experts, and develop new tools that neutralize malicious uses of AI.
