By Benjamin Preminger, Senior Product Manager, Cybersixgill
“You can’t get good help nowadays.” The adage is true for many professions, but especially so for cybersecurity. While cyber-attacks continue to grow in quantity and sophistication each year, most organizations are ill-prepared to defend themselves, primarily because they lack the skilled professionals needed to handle the relentless stream of expanding threats.
Many approaches to closing the skills gap focus on hiring, educating, and training more people – all sound ideas, but the number of open cybersecurity positions waiting to be filled is in the millions worldwide. It will take many years of training and on-the-job experience for humans alone to shore up defenses adequately. In the meantime, generative AI can serve as a highly valuable asset, both now and increasingly so as its power and potential are better understood and applied.
We’ll look at how generative AI – abbreviated to AI for convenience here yet distinct from other forms of artificial intelligence – can help organizations in the short, medium, and long term.
Taking stock of reality
A few statistics quickly paint the picture:
- US$8 trillion – the estimated total damage inflicted by cybercriminals in 2023, up from US$3 trillion in 2015[1]
- 3.5 million – the number of unfilled cybersecurity jobs globally in 2023[2]
- 2.2 – the average cybersecurity maturity level of organizations globally on a 1-5 scale. Translation: Most organizations are reactive and panic when attacked rather than being proactive and preemptive.[3]
- 72% – the share of IT and cybersecurity professionals who say they use spreadsheets to track and manage security hygiene efforts[4]
Additionally, publicly traded companies now face increased scrutiny by the U.S. Securities and Exchange Commission on how they manage and implement cybersecurity protections. Boards of directors are now legally responsible for ensuring adequate efforts are being made to safeguard the corporations they oversee and, in turn, all of a company’s customers and other stakeholders[5].
Short-term solutions: Help for junior-level security team members (senior level, too)
One other interesting statistic before moving on to solutions: zero percent. That’s the unemployment rate for mid- and senior-level cybersecurity professionals, a figure that has held constant since 2016.[6] Those who know what to do in the face of attacks are already gainfully employed. Those stepping into open positions are relatively new to the work and need guidance that their seasoned peers typically don’t have time to provide.
Enter AI.
As anyone who has played around with ChatGPT knows, just a few simple prompts can quickly return information that would otherwise take skilled and extensive research. Using large language models, generative AI tools can comb through diverse sources and provide answers that significantly elevate a user’s understanding of a subject almost immediately. If the first response isn’t sufficient, follow-up queries can explain the topic more thoroughly or suggest other directions to explore.
While ChatGPT is excellent for answering questions on general-interest topics, it’s not equipped to be a full-blown cybersecurity aid. Its training data does not include the specialized, up-to-date sources needed to keep an organization protected from attacks.
Properly implemented, cybersecurity-specific AI tools can be a godsend for junior cybersecurity staff. Rather than taking up senior colleagues’ time or sitting on questions they’re embarrassed to ask, junior team members can use AI as a non-judgmental assistant and, in turn, become more valuable to their organization’s cybersecurity efforts (and, in the process, look better in the eyes of their peers and managers).
For more advanced cybersecurity pros, asking questions or giving commands in natural language expedites the analysis process. For example, “Tell me about this CVE” can return a simple, concise answer that gets to the heart of the issue without the analyst having to wade through numerous sources manually. From there, the senior person can take the next step to protect the organization.
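To make that concrete, here is a minimal sketch of what such a natural-language CVE query could look like in code, assuming an OpenAI-style chat API. The model name, prompt wording, and helper function are illustrative only, not a depiction of any particular product’s interface.

```python
# A minimal sketch of a natural-language CVE lookup, assuming an
# OpenAI-style chat API. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_cve(cve_id: str) -> str:
    """Ask the model for a concise, analyst-oriented CVE summary."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a threat-intelligence assistant. "
                        "Answer concisely for a security analyst."},
            {"role": "user",
             "content": f"Tell me about {cve_id}: affected products, "
                        f"exploitability, and recommended mitigations."},
        ],
    )
    return response.choices[0].message.content

print(summarize_cve("CVE-2021-44228"))  # e.g., Log4Shell
```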
While a handful of solutions are on the market, there’s room for further refinement. If you want the AI application to answer intelligence questions about activity on ransomware sites, it needs access to data from ransomware sites. If you want the latest on initial access broker (IAB) markets, the AI must have access to IAB market intelligence. And if you want the AI to answer intel questions about threat actors sharing exploit code in the underground, it needs access to that vulnerability intelligence.
This is also a matter of efficiency and relevance. Cybersecurity teams only need to know about the threats that could affect their own organizations, so AI tools work best when they incorporate organizational context. And given the sensitive nature of customer assets, these tools must be built to deliver the highest levels of security along with their top-tier AI power.
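As a rough illustration of what “organizational context” can mean in practice, the hypothetical sketch below filters a threat-intel feed against an asset inventory before the model ever sees it. The inventory, feed structure, and function names are all invented for the example.

```python
# Hypothetical sketch: scope threat intelligence to the organization's
# own attack surface before asking the model anything. The inventory
# and intel feed shown here are invented placeholders.

ASSET_INVENTORY = {"apache/log4j", "microsoft/exchange_server"}  # what we actually run

def relevant_items(intel_feed: list[dict]) -> list[dict]:
    """Keep only intel items that touch products in our inventory."""
    return [item for item in intel_feed
            if item["affected_product"] in ASSET_INVENTORY]

def build_prompt(intel_feed: list[dict]) -> str:
    """Fold the scoped intel into the model's context window."""
    scoped = relevant_items(intel_feed)
    lines = "\n".join(f"- {i['affected_product']}: {i['summary']}" for i in scoped)
    return ("Given these threats observed against products we run:\n"
            f"{lines}\n"
            "Which should we triage first, and why?")

feed = [
    {"affected_product": "apache/log4j", "summary": "new RCE exploit traded on forums"},
    {"affected_product": "adobe/coldfusion", "summary": "mass scanning observed"},
]
print(build_prompt(feed))  # only the log4j item survives the filter
```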
Mid-term solutions: Organizations develop their own AI tools
The promise of open-source AI models is that a company can go even further in zeroing in on the data it needs, drawing on its own accumulated history of cybersecurity threats and responses as well as the data most relevant to the issues it faces. In this way, the AI tool carries the “institutional knowledge” traditionally assumed to live in the heads of people who have spent their entire careers in an organization. The AI learns about external threats, internal best practices, and priority intelligence requirements (PIRs).
Furthermore, many enterprises will prefer building homegrown solutions for data security and privacy reasons. It’s hard to trust AI when its creators sit somewhere in Silicon Valley, with no allegiance to your organization and the potential to use your data to train the next version of their own AI solution.
We should emphasize that using open-source models is far from a simple task. You need highly specialized (and highly paid) employees – data scientists, data engineers, and the like – to set up, monitor, and continually feed and support an AI tool worthy of the role.
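For a sense of what that specialized work involves at its very simplest, here is an illustrative fine-tuning sketch using the open-source Hugging Face transformers library. The base model, dataset file, and hyperparameters are assumptions; a production setup would add evaluation, privacy controls, and far more data engineering than this shows.

```python
# Illustrative only: fine-tuning an open-source causal LM on an
# organization's own threat-intel records. Model choice, dataset
# path, and hyperparameters are assumptions for the sketch.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # any permissively licensed base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# internal_intel.jsonl: one {"text": "<incident, response, outcome>"} per line
dataset = load_dataset("json", data_files="internal_intel.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="intel-lm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```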
But such a tool can be even more valuable if those experts build in predictive functions. Just as Amazon and other online retailers suggest purchases by looking at what other buyers of an item have done, an AI tool could recommend how to respond to a specific threat based on previous experience.
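A hypothetical sketch of that recommendation idea: embed past incidents, find the closest match to a new threat, and surface the response that worked before. The embedding model and the incident records below are placeholders, not real guidance.

```python
# Invented sketch of "buyers also chose" applied to incident response:
# nearest-neighbor search over embedded past incidents.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

past_incidents = [
    ("Ransomware via phishing attachment", "Isolate host; reset credentials"),
    ("Exposed RDP brute-forced", "Disable RDP; enforce MFA"),
    ("Log4j exploitation attempt", "Patch to 2.17+; hunt for webshells"),
]
vectors = model.encode([desc for desc, _ in past_incidents])

def recommend_response(new_threat: str) -> str:
    """Return the response tied to the most similar past incident."""
    query = model.encode([new_threat])[0]
    sims = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
    return past_incidents[int(np.argmax(sims))][1]

print(recommend_response("JNDI lookup strings in web logs"))
```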
We expect that the tools fulfilling the requirements of this mid-term stage will be neither generic, off-the-shelf models like ChatGPT nor completely DIY open-source packages. Rather, the tool will be designed for cybersecurity-specific use from the moment it’s installed while allowing the security team to fine-tune it for the organization’s unique contextual challenges.
Long-term solutions: AI becomes an autonomous agent
Artificial intelligence, in general, has been widely implemented to do the tasks that humans would rather not do. The same is likely to apply to AI use for cybersecurity matters.
In the cybersecurity sphere, AI-powered autonomous agents will be valuable in removing both the drudgery and overload that security teams sometimes face. This might include an initial triage of alerts, responses to particular events, and any other function in which an autonomous response is appropriate.
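What might such an agent look like in its simplest form? The invented sketch below scores each alert, auto-closes obvious noise, applies a playbook to routine cases, and escalates anything ambiguous to a human; the classify() function stands in for whatever model-based scoring an organization actually deploys.

```python
# Invented sketch of autonomous alert triage. classify() is a stand-in
# for an LLM- or model-based risk score; thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    description: str

def classify(alert: Alert) -> float:
    """Placeholder for a model-based risk score in [0, 1]."""
    return 0.9 if "ransomware" in alert.description.lower() else 0.2

def triage(alert: Alert) -> str:
    score = classify(alert)
    if score < 0.3:
        return "auto-closed as benign noise"
    if score < 0.7:
        return "contained via playbook (e.g., isolate host)"
    return "escalated to on-call analyst"  # humans keep the final call

for a in [Alert("EDR", "Known ransomware hash executed"),
          Alert("IDS", "Port scan from research scanner")]:
    print(a.source, "->", triage(a))
```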
Such implementations are not likely to take jobs away from humans. As the threat environment becomes more complex and challenging, more work will need to be done. And no system is completely hands-off, given the nature of the threats and the sensitivity of the data to be protected, such as personal information. Consequently, security professionals will always sit at the critical intersection of threat and response, making decisions beyond what AI can be expected to do.
How CISOs can benefit from AI
There’s no way to accurately predict how quickly we’ll reach the mid-term level of AI tools geared to a single organization or the long-term level of autonomous agents. But the field is accelerating at breakneck speed as investors and savvy technical people see the opportunities inherent in generative AI. Don’t be shocked if the long-term level arrives within the next 18 to 24 months.
In the meantime, CISOs would be wise to cast about for AI tools that can boost their team’s readiness by making junior-level people more capable and senior-level people even more efficient. At Cybersixgill, we’re already seeing organizations adopt our AI-driven Cybersixgill IQ at all levels: accelerating processes, improving workflows, and making teams up to 10 times more effective.
It’s also not too early to learn how generative AI works and how organizations can benefit from cyber-specific AI that responds to their organizational context and unique attack surface. Better yet, cybersecurity leaders should invest the time and effort to go well beyond a surface-level understanding of AI and take full advantage of the defensive asset it is.
As American AI researcher Eliezer Yudkowsky says, “By far, the greatest danger of artificial intelligence is that people conclude too early that they understand it.”
The author is part of the team leading the development of Cybersixgill IQ, a generative AI tool designed to make threat intelligence accessible to all members of a cybersecurity team, regardless of their level of experience.
Sources:
[1] Cybersecurity Ventures report, released December 2022
[2] Cybersecurity Ventures report, released April 2023
[3] CYE Cybersecurity Maturity Report
[4] Noetic Cyber survey, as reported by Secureframe, June 2023
[5] U.S. Securities and Exchange Commission, final rule 33-11216, July 2023: https://www.sec.gov/files/rules/final/2023/33-11216.pdf