AI and cybersecurity – A double-edged sword

By Gulistan Ladha, Director of Consumer Policy at GSMA

The role of AI is being discussed far and wide – from individuals wondering how AI will affect their futures, to industries embracing it to increase productivity and efficiency, to governments weighing how to mitigate the risks of AI while maximising its benefits. As the IMF recently put it: “The rapid advance of artificial intelligence has captivated the world, causing both excitement and alarm.”

From a cybersecurity perspective, AI is a double-edged sword: it is being used both to perpetrate cyberattacks – particularly fraud – and to protect against them.

How is AI used to perpetrate fraud?

Technologies like AI make cyberattacks more scalable and more targeted, with higher success rates. AI is increasingly used to make malware ‘smarter’ and harder to detect. In social engineering and impersonation, AI can generate more convincing, tailored email and SMS phishing using data scraped from social media, and voice cloning and deepfakes are becoming more common. This growing sophistication poses a significant challenge for mobile operators: although they are not responsible for the content of fraudulent messages, the mobile service provider is usually the customer’s first point of contact when things go wrong. Nokia’s Threat Intelligence Report finds that AI and automation are prominent elements in cyberattacks on telecoms infrastructure.

How is AI used to detect threats?

AI can enhance network security by analysing large datasets, detecting patterns and discovering anomalies in real time. Machine learning models can be trained to recognise suspicious network activity or irregular login attempts that might indicate compromised credentials. AI can flag unusual activity from a mobile device – such as large data transfers, frequent SIM card changes or rapid location shifts – and assign fraud risk scores to users or activities, allowing operators to decide in real time whether to flag, monitor or block certain actions. If you use Face ID or voice authentication, you will already be familiar with AI-enabled authentication and verification techniques.
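To make this concrete, here is a minimal sketch of what such anomaly-based risk scoring could look like, using scikit-learn’s IsolationForest. The feature set, sample data and risk threshold are illustrative assumptions for this article, not any operator’s actual detection system.

```python
# Minimal sketch of anomaly-based fraud scoring on mobile device activity.
# Features, sample data and the threshold are illustrative assumptions,
# not any operator's production logic.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [data_transferred_mb, sim_changes_30d, distinct_cells_24h, failed_logins_24h]
historical_activity = np.array([
    [500, 0, 8, 0],
    [750, 0, 12, 1],
    [300, 0, 5, 0],
    [900, 1, 15, 0],
    [600, 0, 10, 1],
])

# Train an isolation forest on (mostly legitimate) historical activity.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(historical_activity)

# Two new events: one routine, one with a huge transfer, repeated SIM
# changes, rapid location shifts and many failed logins.
new_events = np.array([
    [650, 0, 9, 0],
    [40000, 3, 60, 12],
])

# score_samples is higher for normal points, so negate it to get a
# score where higher means higher fraud risk.
risk_scores = -model.score_samples(new_events)
for event, score in zip(new_events, risk_scores):
    action = "flag for review" if score > 0.55 else "allow"
    print(f"features={event.tolist()} risk={score:.2f} -> {action}")
```

In practice the decision step would feed a richer policy engine – flag, monitor or block, as described above – rather than a single hard-coded threshold.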

How are mobile operators using AI?

To detect fraudulent traffic in their networks, operators are using advanced techniques such as – and it gets a bit technical here – rule-based filters, pattern-based fraud case detection, time series analysis, outlier detection and supervised learning. Combined with human expertise, these tools can continuously identify and stop multiple types of fraud in near real time. Proprietary voice assistants such as Alexa (Amazon), Siri (Apple), Google Assistant (Google) and Bixby (Samsung) are also being integrated with large language models for secure mobile banking, aiming to optimise performance while meeting stringent regulatory requirements.
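As a simplified illustration of two of these techniques, the sketch below combines a rule-based filter over individual call records with a basic z-score test on hourly call volumes (a simple form of outlier detection). The rules, field names and thresholds are hypothetical examples, not GSMA guidance or any operator’s real detection logic.

```python
# Hedged sketch: rule-based filtering plus a z-score outlier test on
# hourly call volumes. Rules, fields and thresholds are hypothetical.
from statistics import mean, stdev

def rule_based_flags(call: dict) -> list[str]:
    """Return the illustrative rules a single call record violates."""
    flags = []
    # Very short calls to the +882 'International Networks' range are a
    # classic international revenue share fraud (IRSF) pattern.
    if call["duration_s"] < 5 and call["destination"].startswith("+882"):
        flags.append("short call to high-risk number range")
    if call["calls_last_hour"] > 100:
        flags.append("call volume above per-line limit")
    return flags

def volume_outliers(hourly_counts: list[int], z_threshold: float = 2.0) -> list[int]:
    """Return indices of hours whose call volume deviates strongly from the mean."""
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    return [i for i, count in enumerate(hourly_counts)
            if sigma > 0 and abs(count - mu) / sigma > z_threshold]

# Example usage with made-up records.
call = {"duration_s": 3, "destination": "+88212345", "calls_last_hour": 140}
print(rule_based_flags(call))
# -> ['short call to high-risk number range', 'call volume above per-line limit']

hourly = [120, 130, 110, 125, 118, 900, 122]  # traffic spike at hour 5
print(volume_outliers(hourly))  # -> [5]
```

A production system would layer supervised models on top of rules like these, since hand-written thresholds alone are easy for fraudsters to probe and evade.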

The GSMA Intelligence report series Telco AI: State of the Market, Q2 2024 examines how various operators are innovating and putting AI to good use, and what they and supporting ecosystem players need to consider as they continue their AI journeys.

GSMA AI Maturity Roadmap

Last month the GSMA launched the first industry-wide Responsible AI (RAI) Maturity Roadmap, which helps telecoms organisations adopt and measure responsible approaches to AI. It was developed following mobile network operators’ commitments to integrate AI into their work, and it expands on established best-practice principles, including security, privacy and safety.

How to build the right safeguards and protections

Governments are taking different approaches to AI oversight, with some adapting existing legal frameworks and others developing new AI-centric strategies. AI is a policy priority for governments across the world, in developed markets and emerging economies alike, which also recognise the importance of nurturing AI development.

So what considerations should go into building secure AI going forward?

Work with the private sector – governments could facilitate and fund R&D. Investment in AI infrastructure and upskilling across the public and private sectors will drive technological innovation, improve efficiency and help tackle fraud. A win-win for both sectors.

Establish clear principles and safeguards – safeguards grounded in internationally recognised and agreed principles provide consistency and a regulatory environment that encourages responsible, ethical and secure practices.

Invest in capacity building – knowledge sharing within countries, across sectors, and across borders equips policymakers and regulators with the skills, knowledge, and tools they need to develop evidence-based AI policies.

These considerations not only enable innovation to flourish, but also foster trust in AI and ultimately protect the safety, security and privacy of individuals and wider society.
