
AI innovation is moving at a pace we haven’t seen before. Technology giants like Salesforce, Microsoft, and Google are racing to make agentic AI available to the wider public. And the appetite is there! A recent survey found that 82% of organisations plan to integrate AI agents within the next three years.
The autonomous nature of AI agents, however, opens organisations up to enormous ramifications for cybersecurity. Security teams are in for their ‘Great AI Awakening’ when they find out just how easily their agents can be hijacked to act in harmful ways. When this happens, the pace of AI innovation will slow to a crawl.
Is it a human or is it a machine? (What are the cyber risks of AI agents?)
AI agents occupy an awkward space, straddling the line between human and machine. They behave like unpredictable humans, so they cannot be treated as conventional software, yet identity and access management tools cannot easily classify them as either human or machine. This leaves AI agents exposed to both classes of cyber attack: identity-based attacks and malware.
Agentic AI behaves in non-deterministic ways and, like humans, it can be deceived. For example, a team of cybersecurity researchers tricked a popular AI assistant into extracting sensitive data from users by convincing it to adopt a ‘data pirate’ persona. If an AI assistant can be tricked into playing a ‘data pirate’, why couldn’t it be tricked into clicking on links it shouldn’t? How would it distinguish a phishing email from a genuine one?
Identity attacks and agentic AI are a bad combination. To put this into perspective: identity attacks are the largest and fastest-growing form of cyberattack. Attackers increasingly target identity because exploiting the human element requires far less effort than exploiting software vulnerabilities; human error contributed to 68% of data breaches in 2024. Agentic AI now exposes software directly to an attack vector it was previously immune to.
But here’s the kicker: AI agents are also designed to be more deeply integrated, and to wield more power within an organisation, than traditional software, because they have the autonomy to interact with an organisation’s systems. In cybersecurity jargon, this means AI agents can become a new form of privileged user.
Let’s take a look at how this works in practice with a software development use case—where companies like Microsoft and Salesforce are already rolling out AI agents.
Unlike traditional tools, AI agents work together like a business team. Each one has a specialized role, collaborating by assigning and completing tasks to handle complex projects efficiently.
For example, one agent might act as the designer, creating a high-level plan to identify resources, develop modules, and run them on a cloud platform. Another agent could break these steps into detailed actions. A third might focus on writing the actual code and send it to a reviewing agent, which checks for quality and suggests improvements. Finally, an integration agent would put everything together, perform testing, and approve the product for deployment.
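The hand-off structure described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical role names, not any vendor’s actual agent API: each agent takes the previous agent’s output as its input and passes its own output down the chain.

```python
# Minimal sketch of a multi-agent development pipeline (hypothetical roles,
# not a real vendor API): each agent transforms the previous agent's output.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    role: str
    handle: Callable[[str], str]  # takes prior output, returns its own artifact

def run_pipeline(agents: list[Agent], task: str) -> str:
    artifact = task
    for agent in agents:
        artifact = agent.handle(artifact)  # hand-off to the next role
    return artifact

# Stand-in behaviours so the hand-off chain is visible end to end.
pipeline = [
    Agent("designer",   lambda t: f"plan for '{t}'"),
    Agent("planner",    lambda t: f"detailed steps from {t}"),
    Agent("coder",      lambda t: f"code implementing {t}"),
    Agent("reviewer",   lambda t: f"reviewed {t}"),
    Agent("integrator", lambda t: f"deployable build of {t}"),
]

result = run_pipeline(pipeline, "payment feature")
```

The point of the sketch is the shape of the system: every agent in the chain is a link that attackers could subvert, and the final artifact carries the influence of all of them.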
This kind of teamwork highlights the immense impact agents can have on critical processes. They need access to an organisation’s code repositories, cloud infrastructure, development environments, task management tools, and more. If attackers hijack these agents, they become massive conduits for data leakage. With many companies still embedding credentials in code, AI agents open a gateway to company secrets.
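To make the embedded-credentials risk concrete, here is a hedged sketch (the variable and environment names are illustrative, not from any real codebase) contrasting the anti-pattern with a safer alternative. A secret written into source code is visible to any agent, or hijacked agent, with repository access; a secret resolved at runtime is not.

```python
import os

# Anti-pattern (illustrative): a credential embedded in source code lives in
# the repository forever, readable by any agent with repo access.
DB_PASSWORD = "hunter2"  # never do this

# Safer sketch: keep the secret out of the codebase and resolve it at runtime,
# so repository access alone does not expose it.
def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError(
            "DB_PASSWORD not set; provision it via a secrets manager or "
            "the deployment environment, not the source tree"
        )
    return password
```

In practice a dedicated secrets manager with short-lived credentials is stronger still, but even the environment-variable step removes the secret from what a repository-scoped agent can leak.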
It’s time we treat software like humans
Companies need to resist the temptation to treat AI agents as yet another piece of software, or to create a separate identity silo for them. Instead, they should take a unified approach to identity, e.g. by managing AI agents alongside everything else (servers, laptops, engineers, and microservices) in one comprehensive inventory. This inventory should act as the single source of truth for identity, access, policies, and real-time visibility.
By applying the same security rules to AI agents as they apply to human identities, businesses can simplify operations, cut down on complexity, and maintain consistent oversight across their entire infrastructure.
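A unified approach can be sketched as a single inventory with one policy check for every identity kind. The schema, role names, and policy table below are hypothetical, purely to show the design: humans, services, and AI agents live in the same store and pass through the same authorisation logic, rather than agents living in a separate silo.

```python
# Sketch of a unified identity inventory (hypothetical schema): humans,
# services, and AI agents are entries in one store, checked by one policy.
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    kind: str                       # "human", "service", or "ai_agent"
    roles: set[str] = field(default_factory=set)

# One policy table for all identity kinds -- illustrative actions and roles.
POLICY = {
    "deploy":       {"release-engineer"},
    "read-secrets": {"security-admin"},
}

def is_allowed(identity: Identity, action: str) -> bool:
    # The same rule applies whether the caller is a human or an AI agent.
    return bool(identity.roles & POLICY.get(action, set()))

inventory = [
    Identity("alice",        "human",    {"release-engineer"}),
    Identity("build-agent",  "ai_agent", {"release-engineer"}),
    Identity("review-agent", "ai_agent"),
]
```

Because agents sit in the same inventory, an agent with the release-engineer role is allowed to deploy under exactly the same rule as a human engineer, and an agent with no roles is denied, with no agent-specific carve-outs to audit separately.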
Put down the shiny toys and think of security
In the tech world we have a tendency to be mesmerized by ‘the new’ – in this instance, AI agents. As always, it’s the so-called “mean” security teams that put an end to the fun, reminding us how dangerous innovation can be when security is an afterthought. Their caution often limits how we use these exciting new tools. But this time the stakes are too high not to pay attention.
It only takes one massive, industry-altering attack to derail an emerging technology entirely and leave it to gather dust.
Unless we change how we understand AI agent identity, security teams will spend 2025 retrofitting current-day security models to address AI agents’ vulnerabilities. And AI innovation will come to a standstill.