Cyborg Social Engineering: Defending against personalised attacks

By Phil Robinson, Principal Security Consultant, Prism Infosec

Generative AI has the potential to make social engineering attacks far more sophisticated and personalised. The technology can rapidly mine websites for information on a company, its people, their responsibilities and their specific habits to create multi-stage campaigns. Through automated gathering of information, it can acquire photos, videos and audio recordings which can then be used to craft phishing emails, voice attacks (vishing) and deepfake videos and images for spear-phishing attacks against individuals in positions of power, for instance.

We’re already seeing evidence of such attacks in action. Back in February, Hong Kong police revealed that a finance worker at Arup, a UK engineering firm, was duped into transferring $25m after attending a video call in which every attendee, including the CFO, was a deepfake. Similar attacks have been carried out over WhatsApp: LastPass was targeted in April by calls, texts and voicemails impersonating the company’s CEO, while a senior executive at advertising firm WPP was invited to a video call in which a clone of the CEO, crafted from YouTube footage and voice-cloning technology, asked them to set up a new business.

Deepfakes go wide

These are no longer isolated incidents, either: Arup’s CIO, Rob Greig, warned in his statement that the number and sophistication of deepfake scams has risen sharply in recent months. It’s a view substantiated by The State of Information Security 2024 report from ISMS.Online, which reveals that 32% of UK businesses experienced deepfake security incidents over the last 12 months, with Business Email Compromise (BEC) the most common attack type. Indeed, reports suggest a 780% rise in deepfake attacks across Europe between 2022 and 2023.

GenAI is a game-changer for crafting deepfakes because the models refine their own output, delivering hyper-realistic content. Physical mannerisms, movements, vocal intonations and other subtleties are processed by an AI encoding algorithm or a Generative Adversarial Network (GAN) to clone individuals: a generator produces fakes while a discriminator judges them, and each round of training makes the output harder to distinguish from genuine footage. These GANs have significantly lowered the barrier to entry, meaning that creating deepfakes today requires far less skill and fewer resources, according to the Department of Homeland Security.
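
For readers curious about the mechanics, the minimal sketch below (Python, using PyTorch) shows the adversarial loop at the heart of a GAN: a generator learns to produce samples that a discriminator can no longer reliably tell apart from real ones. The network sizes and the random stand-in for “real” data are illustrative placeholders, not a working deepfake pipeline.

    # Minimal GAN sketch (illustrative only): a generator and a discriminator
    # train against each other, the core mechanism behind deepfake content.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64  # placeholder sizes, not a real face model

    generator = nn.Sequential(
        nn.Linear(latent_dim, 128), nn.ReLU(),
        nn.Linear(128, data_dim), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1), nn.Sigmoid(),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    for step in range(1000):
        real = torch.randn(32, data_dim)      # stand-in for real media samples
        noise = torch.randn(32, latent_dim)
        fake = generator(noise)

        # Discriminator: learn to score real samples high and fakes low.
        opt_d.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
        d_loss.backward()
        opt_d.step()

        # Generator: learn to produce fakes the discriminator scores as real.
        opt_g.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_loss.backward()
        opt_g.step()

Each pass of this loop makes the discriminator a slightly better critic and the generator a slightly better forger, which is why the output converges on content that is hard for both machines and people to flag.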

Defending against such attacks can prove challenging because users are much more susceptible to phishing that emulates a real person. There are giveaways, however: deepfake technology typically struggles to accurately render the inside of the mouth, resulting in blurring. There may also be less movement, such as blinking, or more screen flashes than you’d expect. Generally speaking, audio is currently the easiest to fake, followed by photos, while video remains the most challenging.

Why we can’t fight fire with fire

Standalone and open-source tools are now available that scan video, audio and text for possible manipulation and return a reliability score as a percentage, but success rates are mixed. It’s difficult to verify their accuracy because few are transparent about how they arrive at the score, the dataset they were trained on, or when they were last updated. They also vary in approach, from detectors trained on GAN output to classifiers that determine whether a piece of content was produced with a specific tool, although even content deemed authentic by a given piece of software can subsequently be manipulated. Many video apps, messaging and collaboration platforms already apply AI filters, making detection even more problematic.
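
To see why opaque scoring is a problem, consider the hypothetical wrapper below around a detection tool’s output. The fields, thresholds and verdict wording are illustrative assumptions rather than any vendor’s actual API; the point is that a bare percentage is only useful alongside metadata about how and when the detector was built, plus a clear escalation path when the result is inconclusive.

    # Hypothetical shape for consuming a deepfake-detection score (Python).
    # Real tools differ in training data and methodology, which is exactly
    # why their raw percentages are hard to compare or trust in isolation.
    from dataclasses import dataclass

    @dataclass
    class DetectionResult:
        score: float        # estimated probability the content is manipulated
        model_version: str  # records how/when the detector was last updated
        threshold: float = 0.7

        def verdict(self) -> str:
            if self.score >= self.threshold:
                return "likely manipulated"
            if self.score >= self.threshold - 0.2:
                return "inconclusive - escalate to manual review"
            return "no manipulation detected (not proof of authenticity)"

    result = DetectionResult(score=0.82, model_version="2024-03-detector")
    print(f"{result.score:.0%} - {result.verdict()}")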

Given the current technological vacuum, the main form of mitigation today is employee security awareness, with 47% of respondents to the ISMS.Online survey saying they are placing greater emphasis on training. However, the survey notes that even well-trained employees can struggle to identify deepfakes, and this is compounded by a lack of policy enforcement: 34% of organisations were not applying adequate security to BYOD devices and 30% were not securing sensitive information. Zero-trust initiatives may well help here by limiting access to such sensitive information, but few organisations have mature deployments.

Deloitte makes a number of recommendations for mitigating the threat of deepfake attacks in its report, The battle against digital manipulation. In addition to training and access controls, it advocates adding a layer of verification to business processes and clarifying verification protocols for sanctioning payments. This could take the form of multiple layers of transaction approval, from code words to token-based systems, or live-detection verification such as taking a “selfie” or video recording, which is already used in the banking sector for user verification.
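
As an illustration of what layered approval might look like in practice, the sketch below combines a shared code word, a one-time token check and a named second approver before a high-value payment is released. The thresholds, names and the approve_payment helper are hypothetical assumptions for the sake of the example, not Deloitte’s prescription or any bank’s actual process.

    # Minimal sketch of layered payment approval (standard-library Python only).
    # Assumes a hypothetical workflow: a transaction is released only when a
    # code word, a one-time token and a second human approver all check out.
    import hashlib
    import hmac

    def check_code_word(supplied: str, expected_hash: str) -> bool:
        # Compare a hash of the spoken/typed code word in constant time.
        digest = hashlib.sha256(supplied.strip().lower().encode()).hexdigest()
        return hmac.compare_digest(digest, expected_hash)

    def approve_payment(amount: float,
                        code_word_ok: bool,
                        token_ok: bool,
                        second_approver: str | None) -> bool:
        # Low-value payments need the code word; high-value ones need every layer.
        if amount < 10_000:
            return code_word_ok
        return code_word_ok and token_ok and second_approver is not None

    expected = hashlib.sha256(b"harbour lantern").hexdigest()
    ok = approve_payment(
        amount=25_000_000,
        code_word_ok=check_code_word("Harbour Lantern", expected),
        token_ok=True,                  # e.g. confirmed by a separate token service
        second_approver="finance-director",
    )
    print("payment released" if ok else "payment blocked")

The design point is that no single channel, and certainly not a video call on its own, can authorise a transfer: a deepfaked CFO would still need to defeat the out-of-band layers.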

Policy and process

But overarching all of this, we need a comprehensive security policy covering people, process and technology from an AI perspective. It should address AI attack detection and response, for example, ensuring there are channels in place for reporting a suspected GenAI attack or a payment that has already been made. A number of AI standards can already help govern this, such as ISO/IEC 42001:2023 and the NIST AI Risk Management Framework.

Defending against deepfakes will therefore require a three-pronged approach that combines awareness training with security controls, including access controls and user verification, and frameworks that govern how GenAI is used within the business, backed by remediation and response. Ironically, it’s a problem likely to be addressed best by people and process rather than technology.

Looking to the future, some suggest that deepfakes could prompt senior execs to adopt a lower profile online in a bid to limit the capture of their likeness. Conversely, others, such as the CEO of Zoom, believe we will go to the opposite extreme and embrace the technology, creating digital clones of ourselves that attend meetings on our behalf. These clones would learn from recordings to reproduce our mannerisms, be briefed by us, and report back with a call summary and actions. If that approach is widely adopted, detection technologies will prove something of a non-starter, making verification processes and an effective AI policy the primary methods of defence.
