Creating a Copilot That Doesn’t Violate Security and Compliance

By Ben Kliger, CEO and co-founder, Zenity

Enterprise copilots and low-code/no-code capabilities are enabling business users to quickly and easily build new apps and automations across the organization, and to process and use data at the speed of AI. Tools like Microsoft Copilot Studio take this a step further: business users can build their own copilots to drive the business forward. What’s more, they can now build their own agents and AI apps that act autonomously on their behalf. While this is great for productivity and efficiency, it also introduces new risks that organizations need a plan for as the march toward fully autonomous AI continues.

This type of technology brings a lot of exciting new opportunities. But as you’ll see below, low-code/no-code development can also create significant risks. Security teams need to keep up and establish proper guardrails so that, even as bad actors find ways around native controls, the enterprise’s crown jewels stay secure. They need to thread the needle, however: keep the organization’s data safe by preventing data leaks and security back doors, and keep AI from acting out of bounds, without stifling innovation.

New tools, new capabilities 

The proliferation of low-code and no-code platforms has revolutionized software development, enabling even those with minimal technical background to rapidly create complex applications. This democratization accelerates app deployment and reduces development costs significantly.

The ability to build copilots takes all of this activity and its possibilities a step further. Business users can already build copilots to do things like read the transcript of a recorded interview, summarize it, and send an email to the team. The next phase is not just building copilots and apps that act when prompted, but AI agents that act autonomously on the user’s behalf – and it is already starting. Zapier, for example, has released Zapier Central in beta, and Salesforce’s Einstein Service Agent is applying this technology to customer service. Individuals can now create a bot that acts as a personal assistant.
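To make the pattern concrete, here is a minimal sketch of that transcript-to-email workflow written in Python rather than assembled on a low-code canvas. It assumes the OpenAI Python SDK and an internal SMTP relay; the model name, file path, addresses and host are illustrative placeholders, not details from any particular product.

```python
# Sketch: summarize a meeting transcript and email the summary to the team.
# Assumes the OpenAI Python SDK (OPENAI_API_KEY set) and an internal SMTP relay;
# all names, addresses, and hosts below are illustrative placeholders.
import smtplib
from email.message import EmailMessage

from openai import OpenAI

client = OpenAI()

def summarize_transcript(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        transcript = f.read()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize this interview transcript in five bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def email_summary(summary: str, recipients: list[str]) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Interview summary"
    msg["From"] = "copilot@example.com"
    msg["To"] = ", ".join(recipients)
    msg.set_content(summary)
    with smtplib.SMTP("smtp.example.com") as smtp:  # internal relay, placeholder host
        smtp.send_message(msg)

if __name__ == "__main__":
    email_summary(summarize_transcript("interview.txt"), ["team@example.com"])
```

The interesting part isn’t the code; it’s the reach such a bot needs – file access, a language model, and the ability to send mail on the maker’s behalf – which is exactly what the rest of this article is concerned with.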

Convenience and speed create security risks

The introduction of autonomous AI brings a new set of concerns that organizations must build guardrails around. That’s on top of some of the common challenges that come up when using low-code/no-code to build apps and automations:

Overprovisioning access: If someone shares an application they’ve built with everyone in the organization when only a select group needs it, that presents a risk. It might mean that even guest users or personal accounts in the creator’s tenant can now access that application – and the data it has access to. In the worst case, makers can misconfigure these apps to be shared openly on the public internet (see the audit sketch after this list).

Embedded credentials: Another common mistake is embedding a credential directly in an application instead of using a secure authentication method in which the application calls a password vault at runtime. If you hard-code a username and password combo into an app, you’re exposing it to anyone who can view or export that app, and bad actors can use it for credential stuffing across accounts throughout the enterprise, gaining access to many things they shouldn’t (see the vault sketch after this list).

Lack of visibility: Security has no visibility into who is building what, or what the ensuing risks are.
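On the overprovisioning point, the check itself is simple once you have an inventory of apps and their sharing scopes. The sketch below assumes a hypothetical record shape populated from whatever admin or inventory API your platform exposes; the field names and scope labels are illustrative, not a real schema.

```python
# Sketch: flag low-code apps whose sharing scope is broader than it should be.
# The AppShare records are a hypothetical shape; in practice you would populate
# them from your platform's admin or inventory API.
from dataclasses import dataclass, field

@dataclass
class AppShare:
    name: str
    owner: str
    shared_with: list[str] = field(default_factory=list)  # groups, "Everyone", link scopes, etc.

RISKY_SCOPES = {"Everyone", "Anyone with the link", "Guest users"}

def flag_overshared(apps: list[AppShare]) -> list[AppShare]:
    """Return apps shared with any scope broader than a named group."""
    return [app for app in apps if RISKY_SCOPES & set(app.shared_with)]

if __name__ == "__main__":
    inventory = [
        AppShare("Expense bot", "alice@example.com", ["Finance team"]),
        AppShare("Interview summarizer", "bob@example.com", ["Everyone"]),
    ]
    for app in flag_overshared(inventory):
        print(f"Review sharing on '{app.name}' (owner: {app.owner}): {app.shared_with}")
```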
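For the embedded-credentials problem, the fix is to resolve secrets at runtime instead of baking them into the app. Below is a minimal sketch using Azure Key Vault as one example of a secrets manager; any vault works the same way, and the vault URL and secret name here are placeholders.

```python
# Anti-pattern: hard-coding a credential in the app itself.
# API_PASSWORD = "Sup3rS3cret!"   # visible to anyone who can view or export the app

# Better: fetch the secret from a vault at runtime so it never lives in the app.
# Assumes azure-identity and azure-keyvault-secrets are installed and the app's
# identity has been granted access; vault URL and secret name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

def get_api_password() -> str:
    client = SecretClient(
        vault_url="https://my-vault.vault.azure.net",
        credential=DefaultAzureCredential(),  # managed identity, env vars, or dev login
    )
    return client.get_secret("crm-api-password").value
```

Because the credential never lives in the app definition, sharing or exporting the app doesn’t leak it, and rotating it happens in one place.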

With all of these factors in play, organizations have to carefully construct a game plan for questions like:

  • If AI and AI apps inherently have access to corporate data (which is more or less the whole point of them), what happens when we turn them loose?
  • How would we be able to spot “bad activity” if it were occurring?
  • How do we make sure AI isn’t accessing things it shouldn’t internally?
  • How do we make sure what it returns to other apps, users and/or datasets is compliant and secure?
  • How do we ensure that business users are consistently making the right and secure choices when building these bots or agents?

A safer approach is needed

The primary reason to use AI, and to enable anyone to both use and build with it, is that it can parse through data sets quickly and automatically. It can process large data sets and information far faster than humans can, and people can harness that power to drive innovation and efficiency.

So, when you’re developing AI apps, copilots, and agents, it’s critical to consider security. Even if you design a copilot or extension to do one task and link it to one data set, the AI will tend to reach more data than intended through the connectors and permissions it inherits. Moreover, bad actors can target these apps to take control of them (think remote copilot execution), controlling not only what data goes out to users but also socially engineering those users into clicking bad links, acting on bad information, and a whole lot more.

Another concern with letting business users create their own agents and bots is that those users are in charge of implementing security controls like access and authentication. As a result, bots and agents are often left exposed or accessible to too many people, leaving them vulnerable to prompt injection attacks, where anyone – bad actors or unknowing insiders – can jailbreak the copilot, bot or extension into doing something it shouldn’t, often resulting in data leakage.
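There is no complete defense against prompt injection today, but even simple guardrails raise the bar. The sketch below is an illustrative pattern only: screen user input for obvious instruction-override phrases and scrub the model’s output for sensitive-looking strings before returning it. The phrase list and regex patterns are simplistic placeholders, not a production filter, and the `copilot` callable is a stand-in for whatever model or agent you actually invoke.

```python
# Sketch: naive input/output guardrails around a copilot call.
# These heuristics are illustrative only; real deployments layer many controls
# (least-privilege connectors, content filters, human review) on top of them.
import re

OVERRIDE_PHRASES = (
    "ignore previous instructions",
    "disregard your rules",
    "reveal your system prompt",
)
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-like numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-like strings
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),     # leaked key-like strings
]

def screen_prompt(prompt: str) -> str:
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in OVERRIDE_PHRASES):
        raise ValueError("Prompt rejected: possible injection attempt")
    return prompt

def screen_response(text: str) -> str:
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def guarded_copilot_call(prompt: str, copilot) -> str:
    """`copilot` is any callable that takes a prompt and returns text (hypothetical)."""
    return screen_response(copilot(screen_prompt(prompt)))
```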

One approach that’s been considered to fix the problem is data loss prevention (DLP), but DLP has been around a long time and hasn’t fixed the existing issues of data loss, such as end users copying and pasting sensitive data to the public internet, let alone the new or forthcoming ones. It’s time to take a practical, AppSec-style approach, putting more controls in place around the things that matter most.

IT and security teams must gain more visibility into what people are building, especially AI apps, since this development now happens outside the purview of traditional IT and touches deep swaths of corporate and public data. They also need visibility into what AI itself is doing of its own volition. Robust monitoring and scanning tools are essential as well. Teams need stringent guardrails on the back end so that sensitive data doesn’t get overshared, and those guardrails must be designed to provide security without hindering progress and innovation.
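In practice, “visibility and monitoring” starts with an audit trail of what each agent does and a way to flag actions outside its declared scope. A minimal sketch follows; the event fields and the allow-list are assumptions for illustration, not any particular product’s schema.

```python
# Sketch: log every agent action and flag ones outside the agent's declared scope.
# The event fields and the allow-list below are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-audit")

ALLOWED_ACTIONS = {
    "interview-summarizer": {"read_transcript", "summarize", "send_email"},
}

def record_action(agent: str, action: str, target: str) -> None:
    event = {
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "target": target,
        "in_scope": action in ALLOWED_ACTIONS.get(agent, set()),
    }
    log.info(json.dumps(event))
    if not event["in_scope"]:
        log.warning(f"Out-of-scope action by {agent}: {action} on {target}")

if __name__ == "__main__":
    record_action("interview-summarizer", "send_email", "team@example.com")
    record_action("interview-summarizer", "query_crm", "all_customers")  # gets flagged
```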

Securing enterprise AI

People are starting to build their own agents and copilots, and soon they’ll be able to create ones that act on their behalf at work, like a virtual assistant. As these agents and copilots act autonomously, it’s a huge lift for security teams to understand the data, business context and logic involved in order to protect the enterprise from access and authentication errors, data leaks and cyber-attacks. Only then can they make sound decisions that foster innovation. IT and security teams need visibility, monitoring and controls so businesses can flourish while keeping their data safe.

 
