AI in Cybersecurity: Friend or Foe?

By Adam Geller, CEO, Exabeam

How organizations can both leverage and defend against artificial intelligence (AI) in security operations. 

While AI has been around for many years and isn’t a new concept, the emergence of generative AI (GenAI) powered by large language models (LLMs) has drastically changed conversations about AI globally. Before OpenAI’s public release of its GenAI tool ChatGPT, AI was often seen as a tool with limited intelligence and capability. Now, as new GenAI use cases continue to prove its expanded capability in areas like security and productivity, adoption is beginning to span every industry as enterprise executives race to implement AI across their tech stacks and workflows. Companies like Google have also opened a path to experimenting with AI engineering through offerings like Bard and Vertex AI.

Right now, security teams are witnessing two different conversations around AI in cybersecurity:

  • First, AI’s potential for defense and all the ways enterprises can leverage its power to shore up security postures while streamlining operations.
  • Second, concerns regarding both privacy and accuracy, as well as how to defend against bad actors harnessing AI themselves.

In the grand scheme of it all, these conversations can be segmented into three categories:

  1. How to leverage AI in security operations
  2. How to secure AI while using it
  3. How to ultimately defend against AI-driven cyberattacks

Leveraging AI Tools for Security Operations 

Security teams are now asking the critical question, “How can we leverage AI to transform security operations?” More specifically, these teams are evaluating GenAI for predictive analytics, threat detection, investigation, workflow automation, and AI copilots.

Modern companies collect, store, and even transport massive amounts of data every day. Any sensitive information, such as addresses, payment details, Social Security numbers, and names, is considered security-relevant data. The sheer volume of this security-relevant information is difficult to fathom, yet organizations keep collecting it. With AI, a new realm of tools and resources opens up for security teams.

Machine learning (ML) is one of the best tools for accurately identifying patterns in these huge data stores, largely thanks to the mathematical approach it takes to discerning statistical anomalies. One example is ML’s ability to detect unexpected system access by a user based on that user’s established pattern of behavior within the specific system. This ability to discern behavioral abnormalities can then be used to assign dynamic risk scores to user activities, helping determine whether action should be taken to secure internal systems and networks.
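To make this concrete, here is a minimal, hypothetical sketch of behavior-based risk scoring: it baselines the hours at which a user typically accesses a system and converts deviation from that baseline into a 0-100 risk score. The sample data, scaling factor, and cap are illustrative assumptions, not any vendor’s actual model.

```python
from statistics import mean, pstdev

# Hypothetical history: hours of day at which this user previously accessed the system.
baseline_hours = [9, 9, 10, 8, 9, 10, 9, 11, 9, 10]

def risk_score(event_hour: int, history: list) -> float:
    """Map how far a new access time deviates from the user's baseline
    onto a 0-100 risk score (larger deviation -> higher risk)."""
    mu = mean(history)
    sigma = pstdev(history) or 1.0  # guard against zero variance
    z = abs(event_hour - mu) / sigma
    return min(100.0, z * 25.0)     # scaling factor is illustrative only

print(risk_score(10, baseline_hours))  # in-pattern access -> low score (~18.8)
print(risk_score(3, baseline_hours))   # 3 a.m. access -> capped at 100.0
```

In practice, a UEBA product would baseline many more signals than login hours, but the principle of scoring deviation from learned behavior is the same.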

Beyond this, there’s a major role for GenAI in support of a strong defense. Companies are challenged to make sense of the massive streams of security information they must manage while handling a shortage of qualified engineers. In 2024, expect to see cybersecurity tools adopt natural language “prompting” (similar to ChatGPT) in their core user interfaces. This will allow newer, less experienced security analysts to execute powerful but complex search queries in seconds, and allow a CISO to make quick sense of the information coming out of their security operations center (SOC) by explaining complex data in plain, human-language terms.
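As a rough illustration of how such prompting might work under the hood, the sketch below wraps an analyst’s plain-English question in a prompt template and hands it to an LLM. The llm() stub and the query syntax are placeholders, not any specific product’s API or query language.

```python
# Illustrative sketch only: translating an analyst's plain-English question into a
# structured search query with an LLM. The llm() stub and the query syntax are
# placeholders, not any vendor's actual API or query language.
PROMPT_TEMPLATE = """You translate analyst questions into SIEM search queries.
Available fields: user, src_ip, event_type, timestamp.
Question: {question}
Return only the query."""

def llm(prompt: str) -> str:
    # Stand-in for a call to a hosted chat-completions endpoint.
    return 'event_type="failed_login" AND timestamp >= -24h | stats count by user'

def natural_language_search(question: str) -> str:
    return llm(PROMPT_TEMPLATE.format(question=question))

print(natural_language_search("Which users had failed logins in the last day?"))
```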

A Defense in Depth Strategy for Securing AI 

CISOs face a dual challenge: harnessing the productivity potential of GenAI while ensuring its secure deployment. While the benefits of GenAI can be immense, there’s a growing concern among companies about the risks it poses, particularly in terms of unintended training, data leakage, and the exposure of sensitive corporate information or personally identifiable information (PII).

In recent conversations with customers, a striking insight emerged: approximately three-quarters of CISOs have imposed bans on the use of GenAI tools within their organizations, citing security concerns. They are actively seeking strategies to secure these tools before fully integrating them into their business processes. The apprehension is rooted in the fear that GenAI tools, while powerful, might inadvertently learn and disclose confidential corporate secrets or sensitive customer data.

To navigate this complex terrain, companies should adopt a ‘defense in depth’ strategy, a layered approach to security that is well-established in other domains of data protection. This strategy involves not only leveraging traditional endpoint security and data loss prevention (DLP) tools but also integrating more advanced, AI-driven solutions such as user and entity behavior analytics (UEBA). UEBA plays a crucial role in providing a comprehensive view of how GenAI tools are being utilized within the organization. It goes beyond mere usage tracking, delving into the nuances of how these tools are employed and the nature of the data they interact with. By analyzing patterns of behavior, UEBA helps in identifying anomalies and potential risks, thereby enabling a more nuanced and informed assessment of the security posture.
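As one small illustration of a single layer in such a strategy, the sketch below flags outbound requests to an assumed watchlist of GenAI domains when the payload appears to contain PII. The domain list, regexes, and function names are hypothetical and far from production-grade; they simply show how DLP-style inspection can feed UEBA risk scoring.

```python
import re

# Hypothetical watchlist and patterns -- illustrative, not exhaustive or production-grade.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com"}
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_outbound(domain: str, payload: str) -> list:
    """Return the PII types found in a payload headed to a known GenAI service."""
    if domain not in GENAI_DOMAINS:
        return []
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(payload)]

alerts = inspect_outbound("chat.openai.com", "Customer SSN is 123-45-6789")
print(alerts)  # ['ssn'] -> feed into UEBA risk scoring or a DLP response playbook
```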

Incorporating UEBA into the security framework allows organizations to understand the full spectrum of GenAI tool usage and its implications. This insight is invaluable for formulating a risk profile that is not just based on hypothetical scenarios but grounded in actual usage patterns. It enables CISOs to make informed decisions about deploying GenAI tools, ensuring that while the organization reaps the benefits of AI-driven productivity, it does not compromise on security.

Defending Against Adversaries with AI 

While AI isn’t the sole culprit behind today’s rise in cyberattacks, it will continue to gain strength as an enabler. Other productivity improvements, like the shift to the public cloud, have also expanded the threat landscape. As data infrastructure systems evolve, organizations continue to tackle problems like explosive, unmanaged data growth, expanded attack surfaces, and rising cases of stolen and compromised credentials and ransomware attacks. For every step forward, the industry faces two steps back. No matter where your data is, bad actors are working daily to figure out how to access it.

While we are still in the early stages of GenAI, both fears and promises of the technology will be tested for years to come.

Unfortunately, cyber adversaries are already abusing GenAI tools to amplify the destructive force of security threats. Major data breaches continually make headlines, and many of them involve AI. Bad actors will continue developing AI-powered threats that are increasingly difficult to detect and prevent. Social engineering combined with the power of GenAI, as just one example, can create highly persuasive phishing attacks as AI models mimic writing styles and vocal patterns.

Both AI and human adversaries are proving to be a relentless force for companies to defend against. Security teams need to be well-armed to defeat both.
