AI In Cybersecurity: Exploring the Opportunities and Dangers

By Dr. Madhu Shashanka, Chief Data Scientist and Co-Founder, Concentric AI

If you’ve been keeping up with the news surrounding generative artificial intelligence (AI), you’re probably in one of two camps – optimistic or concerned. In the rapidly evolving world of cybersecurity and new technologies, generative AI is no different: it carries great potential along with an equal measure of apprehension. While AI offers groundbreaking advantages in automating and enhancing security measures, it also introduces new challenges that could exacerbate existing risks and create new ones.

Let’s explore the dangers and opportunities AI brings to the cybersecurity table.

The Opportunity: Does AI Improve Cybersecurity?

Before we answer this question, let’s take a step back and first review the types of use cases that are ideally suited for AI. Any task that is hard to summarize or capture in terms of rules, but can be fairly easily accomplished by a human, is a good candidate. A task is an even better candidate when it must be done at scale, repeatedly, millions of times. For example, reviewing an email to determine if it is spam or analyzing a medical image for a tumor are tasks that AI can handle efficiently. Groups of AI use cases include:

Augmenting human expertise – Within cybersecurity, I see tremendous opportunity for using AI to enable higher productivity and reduce risk from human error. Whether it’s in a SOC environment helping with threat hunting or incident response, or in the day-to-day operations of the cybersecurity team, AI can add real value by ingesting data, providing professionals with better context, and automating routine tasks. The key is to use AI to make experts more productive, especially when the cost of making an error is too high.

Precision or recall – It is important to evaluate whether the use case under consideration is precision-driven or recall-driven. Depending on where the use case falls, you may have to choose different modeling approaches.

For example, high recall – finding everything that needs to be found – often comes at the cost of low precision, meaning more false positives. In threat detection, achieving high recall is crucial to ensuring that no potential threat is overlooked.
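To make the trade-off concrete, here is a minimal sketch with two hypothetical detectors; the counts are invented for illustration, not drawn from any real system:

```python
# Toy precision/recall arithmetic for two hypothetical threat detectors.

def precision(tp: int, fp: int) -> float:
    """Fraction of flagged items that were real threats."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of real threats that were flagged."""
    return tp / (tp + fn)

# Recall-tuned detector: catches 98 of 100 real threats, but raises 400 false alarms.
print(f"High recall:    precision={precision(98, 400):.2f}, recall={recall(98, 2):.2f}")

# Precision-tuned detector: only 5 false alarms, but misses 30 of the 100 threats.
print(f"High precision: precision={precision(70, 5):.2f}, recall={recall(70, 30):.2f}")
```

The first detector overlooks almost nothing but buries analysts in false positives; the second stays quiet but misses real threats. Which failure mode is acceptable depends on the use case.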

In the scenario of finding unknown unknowns, anomaly detection and related techniques can yield real results. But while most of the detected anomalies are mathematically valid, they tend to have benign explanations, and the resulting false positives make such tools hard to operationalize. When faced with unknown unknowns, it is often better to prioritize the detection of a handful of patterns or behaviors of interest and reframe the problem as one of known unknowns.
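As an illustration, here is a minimal anomaly-detection sketch using scikit-learn’s IsolationForest on simulated activity data; the features (logins, megabytes transferred) and the synthetic data are assumptions made for this example only:

```python
# Minimal anomaly-detection sketch: flag outlying user sessions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated per-user daily activity: [logins, MB transferred]
normal = rng.normal(loc=[10.0, 200.0], scale=[2.0, 50.0], size=(500, 2))
unusual = np.array([[60.0, 5000.0], [1.0, 9000.0]])  # two planted outliers
X = np.vstack([normal, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = anomaly, +1 = normal

print(f"Flagged {int((flags == -1).sum())} of {len(X)} sessions as anomalous")
# Each flag still needs analyst triage: a mathematically valid anomaly
# is not necessarily a security incident.
```

Even in this toy setup, the model flags some perfectly benign sessions alongside the planted outliers, which is exactly the operationalization problem described above.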

The Generative AI revolution – Generative AI has significantly impacted cybersecurity by increasing the sophistication of results and changing the cost economics. It can create human-quality samples – audio, video, text – that are hard to discern from human-generated output. It also opens up opportunities to eliminate the need to learn tool-specific interfaces, lowering the barriers to entry, and to increase efficiency by automating rote work.

The Danger: Bad Actors, Ethics, Costs…

Sophistication of Attacks

Unfortunately, the same AI technologies that defend can also be weaponized. Generative AI can create legitimate-looking deepfakes or generate phishing emails that are increasingly difficult to distinguish from genuine communications. With the sophistication of social engineering attacks on the rise, users will find it much harder to differentiate real messages from fake ones. Deepfakes can also render some biometrics-based authentication technologies, such as voice identification, ineffective. This is a ticking time bomb waiting to be exploited.

The Cost of Mistakes

AI is not without its flaws. When mistakes occur, especially in a field as critical as cybersecurity, the costs can be monumental. This is why a human-in-the-loop approach – a model of interaction in which a machine process and a human operator work in tandem, each contributing to the system’s effective functioning – is often recommended when the cost of errors is high. For example, while AI-generated code can speed up development, it can also introduce new vulnerabilities that a human expert would need to catch. This cost should be the driving consideration in how you approach and design the solution.
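One common way to implement human-in-the-loop is a confidence gate: the model acts autonomously only on high-confidence cases and defers the rest to an analyst. The sketch below assumes a hypothetical classify() stub and threshold; neither comes from any specific product:

```python
# Minimal human-in-the-loop sketch: auto-handle confident verdicts,
# queue uncertain ones for analyst review.

CONFIDENCE_THRESHOLD = 0.95  # tune to the cost of an unreviewed mistake

def classify(alert: dict) -> tuple[str, float]:
    """Stand-in for a real model; returns (verdict, confidence)."""
    return ("malicious", alert.get("score", 0.5))

def handle_alert(alert: dict) -> str:
    verdict, confidence = classify(alert)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-{verdict}"      # machine handles the routine case
    return "queued-for-analyst"       # human reviews the uncertain case

print(handle_alert({"id": 1, "score": 0.99}))  # auto-malicious
print(handle_alert({"id": 2, "score": 0.60}))  # queued-for-analyst
```

The threshold encodes the cost of a mistake: the higher the stakes, the more cases should land in the analyst queue.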

But even well-intentioned users of AI can increase risk for an organization. AI-generated code expands the threat surface precisely because there are no guardrails on how the outputs of AI can be used.
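To illustrate the kind of flaw a reviewer needs to catch, here is a hypothetical example of plausible-looking generated code (not the output of any specific model) alongside the fix:

```python
# A classic vulnerability AI-generated code can introduce: SQL injection
# via string interpolation. The schema and queries are invented for illustration.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # What a human reviewer should insist on: a parameterized query.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -> injection dumps every row
print(len(find_user_safe(conn, payload)))    # 0 -> payload treated as a literal
```

Static analysis and code review are exactly the guardrails that catch this before it ships.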

Ethical and Operational Challenges

AI in cybersecurity comes with its own set of ethical and operational challenges, too. Issues like model bias, data privacy, IP ownership, and the high operational costs of implementing AI solutions can be prohibitive for smaller organizations. These challenges must be methodically addressed to effectively harness AI’s full potential.

Also, we cannot overlook the fact that leveraging the latest and greatest advances in generative AI is not easy. Operational costs, including qualified personnel and compute, can become an impediment for many companies that don’t have significant resources or the ability to invest for the long term.

Cybersecurity startups have effectively applied generative AI technologies to targeted problems such as data security, and an open-source ecosystem for AI models is emerging. While the tide is slowly turning, the technology is still mostly controlled and driven by a handful of large enterprises.

The Balanced Approach: Striking the Right Chord

AI should not be viewed as a magic bullet for every problem, but as one tool in the cybersecurity toolbox. A balanced approach that combines AI’s computational power with human expertise can yield the most effective security solutions. Cybersecurity is a dynamic field where threat actors are constantly adapting, and defenses have to adapt and improve as well. The focus should be on overall risk mitigation rather than solely relying on AI.

Ultimately, AI in cybersecurity is a double-edged sword. While it offers incredible opportunities for innovation and efficiency, it also opens the door to new kinds of risks that organizations must carefully manage. By understanding both the dangers and opportunities, we can better prepare for a future where AI plays an increasingly central and productive role.
