
Kyocera CISO Andrew Smith explains how he’s responded to the cyber risks associated with AI and how businesses can start implementing the technology.
Ever since AI’s meteoric rise to prominence following the release of ChatGPT in November 2022, the technology has been at the centre of international debate. For every application in healthcare, education, and workplace efficiency, reports of cybercriminals abusing it for phishing campaigns, automated attacks, and ransomware have made mainstream news.
Regardless of whether individuals and businesses like it, AI isn’t going anywhere. That’s why, in my view, it’s time to start getting real about the use cases for the technology, even if it might lead to potential cyber risks. Companies that refuse to adapt are risking being left behind in the same manner that stubborn businesses were when they refused to adjust during the early days of the Dot-com boom.
When it comes to early adoption, everyone wants to be Apple; nobody wants to be Pan Am. So, how do businesses adapt to the new world of AI and tackle the associated risks?
Step 1: Understand the legal boundaries of AI and identify if it’s right for your business
Despite the risks, the mass commercialization of AI is a positive development as it means legal conditions are in place to help govern its use. AI has been around for a lot longer than ChatGPT; it’s just that we’re only now starting to set guidelines on how to implement and use it.
Regulations are constantly changing given the rapid evolution of AI, so it’s essential that businesses are aware of the rules which apply to their sector. Consultation with legal professionals is as crucial as any step of the process; you don’t want to commit a large amount of capital towards a project which falls foul of the law.
Once you’ve got the all-clear to proceed – hopefully with some additional understanding of the legal parameters – it’s down to you to identify if and where AI can add value to your business and how it could affect your approach to cybersecurity. Are there thousands of hours being spent on mundane tasks? Could a chatbot speed up the customer service process? How will you keep sensitive data safe after the introduction of AI software?
What’s important is that businesses have taken the time to identify where AI could add value and not just include it in digital transformation plans because they think it’s the right thing to do. Fail to prepare, prepare to fail – and avoid embarking on vanity projects that could do more harm than good.
Step 2: Decide on your AI transformation partner
This doesn’t mean you start using ChatGPT to run your business!
Assuming you don’t already have the talent in-house, there are hundreds, if not thousands, of AI transformation businesses for you to partner with on your journey.
I won’t labour over this step as every business will have its own procurement processes. Still, my best advice is to look at the case studies of an AI transformation company’s existing work and even reach out to their existing clients to find out if their new AI tools have been helpful. Crucially, make a note of any security issues encountered in AI projects and bear this knowledge in mind. Like anything, a third-party endorsement for impactful work goes a long way.
That said, with the rapid growth in AI, case studies are not always freely available, and businesses shouldn’t discount skilled firms on that basis alone. Instead, if a company has the credentials, insight, and technology, give it the opportunity to demonstrate its capabilities and how they support your journey.
Step 3: Ensure cyber-hygiene and cyber-education are communicated across the business
Unfortunately, most cyber-attacks are caused or enabled by insiders, usually employees. In the vast majority of cases, it’s not malicious; it’s just a member of your team who doesn’t understand the implications of cyber risks and doesn’t take all the necessary precautions.
Therefore, your best opportunity to nullify those risks is by thoroughly and consistently educating your employees. This should apply just as much to new AI tools as to anything else at the business.
It seems obvious to most by now, but ChatGPT is free because we are the product. Every time you input data into the model, it learns from your input, and there’s a distinct possibility that your data will be regurgitated at some stage to someone else. That’s why staff must be careful about entering sensitive information, even if an AI tool claims to keep data secure.
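To make that concrete, below is a minimal sketch of the kind of pre-processing gate a business might place between staff and any external LLM. The patterns and the redact() helper are illustrative assumptions, not a production control: anything matching a sensitive pattern is swapped for a placeholder before the prompt ever leaves your network. In practice, you’d lean on dedicated data-loss-prevention or PII-detection tooling rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; extend with whatever counts as sensitive in your
# business (customer IDs, project codenames, API keys, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b0\d{4}\s?\d{6}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Draft a reply to jane.doe@example.com about the refund to card "
        "4111 1111 1111 1111; her number is 07123 456789."
    )
    # Only the redacted version would ever be sent to an external LLM.
    print(redact(prompt))
```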
Not inputting sensitive company data into large language models (LLMs) might be an easy and obvious starting point, but there’s plenty more that companies should be educating their employees about when it comes to cyber-hygiene, not just its relevance to AI. Key topics can include:
- Best practices in handling sensitive company data
- The right way to communicate and flag potential breaches
- Implementing an incident/rapid response plan
- Regularly backing up data and ensuring it is secure (see the sketch after this list)
- Secure by design – “Doing the thinking up front”
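To illustrate the backup point above, here is a minimal sketch of a scheduled backup job: it archives a directory into a timestamped, compressed file and records a checksum so later integrity checks can spot tampering or corruption. The paths are placeholders; a real deployment would add encryption, off-site copies, and regular restore testing.

```python
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

# Placeholder locations; point these at the data you actually need to protect.
SOURCE_DIR = Path("data/customer_records")
BACKUP_DIR = Path("backups")

def run_backup() -> Path:
    """Create a timestamped .tar.gz of SOURCE_DIR and store its SHA-256 alongside it."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(BACKUP_DIR / f"backup-{stamp}"), "gztar", SOURCE_DIR)

    # Record a checksum so later integrity checks can detect corruption or tampering.
    digest = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
    Path(archive + ".sha256").write_text(f"{digest}  {Path(archive).name}\n")
    return Path(archive)

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")
```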
I believe education and training remain the best tools for tackling cybercrime and, failing that, you should have a solid plan in place so that criminals can’t hold you to ransom should the worst happen.
Step 4: Implementation and regular review
If you’ve successfully completed steps 1-3, you should have a powerful new AI tool to improve your business.
Once your staff have been trained on the security risks and how to use it, AI shouldn’t be treated as a ‘set and forget’ tool – any business using it should constantly review its effectiveness and make the necessary tweaks to ensure it provides maximum value, just as we do with our staff. It’s not just about efficiency either: there’s a good chance that regular reviews will expose potential vulnerabilities, and it’s far better for you to catch them before a cybercriminal does.
If you skip one of the above steps, you risk encountering significant security issues and ultimately wasting capital on a failed or troublesome project. Follow each step correctly, however, and AI will become a powerful tool to help you stay ahead of the curve.