Securing The Future: Cybersecurity Predictions for 2024

[By Dominik Samociuk, PhD, Head of Security at Future Processing]

When more than 6 million records of ancestry and genetic data were breached from 23andMe’s secure database, companies were forced to confront and evaluate their own cybersecurity practices and data management. With approximately 2.39 million instances of cybercrime experienced across UK businesses last year, the time to act is now.

If even the most secure and unsuspecting businesses aren’t protected, then every business should consider itself a target and operate accordingly. As we roll into 2024, it is unlikely there will be a reduction in cases like these. Instead, expect an uptick in the sophistication of the methods hackers employ to obtain sensitive data – a commodity whose value continues to rise.

In the next two years, the cost of cyber damage is predicted to grow by 15% yearly, reaching $10.5 trillion by 2025. We won’t be saying goodbye to ransomware in 2024, but rather saying hello to an evolved, automated, adaptable, and more intelligent form of it. But what else is expected to take the security industry by storm in 2024?

Offensive vs. Defensive Use of AI in Cybersecurity

Cybersecurity is a constant cycle for companies: from attack to defence, an organisation’s security experts must remain on guard against malicious activity. In 2024, the use of Generative AI will continue to rise, with an alarming 70% of workers who use ChatGPT not making their employers aware – opening the door to significant security issues, especially for outsourced tasks like coding. And while its uses are groundbreaking, Gen AI’s misuses, especially when it comes to cybersecurity, are cause for concern.

Cybersecurity breaches will come from more sophisticated sources this year. As artificial intelligence (AI) continues to surpass development expectations, systems that can analyse and replicate human voices and writing are now widely available. With platforms like LOVO AI and Deepgram making their way into mainstream use – often for hoax or ruse purposes – cybercriminals are putting the same tools to more sinister use, tricking unsuspecting victims into disclosing sensitive network information about their business or place of work.

Cybercriminals target the weakest part of any security operation – the people – by encouraging them to divulge personal and sensitive information that can be used to breach internal cybersecurity. Further, Generative AI platforms like ChatGPT can be used to automate the production of malicious code introduced internally or externally to the network. On the other hand, AI is being used to strengthen cybersecurity in unlikely ways. Emulating a cinematic cyber-future, AI can detect malware and abnormal system or user activity, alert human operators, and then equip staff with the tools and resources needed to respond.
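To make the defensive side of this concrete, the sketch below shows one common pattern: training an unsupervised model on historical activity and escalating outliers to a human analyst. It is a minimal illustration only, assuming Python with scikit-learn; the session features and thresholds are hypothetical, not drawn from any particular product.

```python
# Minimal sketch: flagging anomalous user activity with an unsupervised model.
# Assumes Python with scikit-learn; feature names and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [login_hour, megabytes_transferred, failed_logins]
normal_sessions = np.array([
    [9, 120, 0],
    [10, 95, 1],
    [11, 150, 0],
    [14, 110, 0],
    [16, 130, 1],
])

# Train on historical "normal" behaviour; contamination is the expected anomaly rate.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# Score new sessions: a prediction of -1 marks an outlier worth escalating to a person.
new_sessions = np.array([
    [10, 105, 0],      # looks routine
    [3, 4800, 12],     # 3 a.m. login, large transfer, many failed attempts
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ALERT: review manually" if label == -1 else "ok"
    print(session, status)
```

The point of the pattern is not automation for its own sake: the model narrows thousands of events down to a handful of alerts that human operators can actually investigate.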

Ultimately, like any revolutionary technology, AI produces both hazards and opportunities for misuse and exploitation. With alarming cases of abuse on the rise, cybersecurity experts must weigh these effects before settling on an adaptable strategy for the year.

Data Privacy, Passkeys, and Targeting Small Businesses

Cybercriminals turning their expertise against small businesses is expected to increase in 2024. By nature, small businesses rarely have the resources to combat the steady stream of cybersecurity threats that larger organisations face on a daily basis. With areas of cybersecurity left unaccounted for, cybercriminals are likely to exploit vulnerabilities within small business networks more and more.

They may also exploit the embarrassment small business owners feel in these situations. If their data is being held to ransom, a small business owner without the legal resources needed to fight (or tidy up) a data breach is more likely to give in to an attacker’s demands to save face, often at a cost of thousands of pounds. Regular custom, loyalty, trust, and reputation make or break a small business, and even the smallest data breach can, in one fell swoop, lay waste to all of them.

Unlikely to have dedicated cybersecurity teams in place, small businesses often rely on cheaper, less secure data management solutions – making them prime targets. Contrary to expectations, 2024 will not see us say goodbye to ransomware. In fact, these tools are likely to be used more often against larger, well-insured companies, driven by the gold rush on data harvesting.

Additionally, changing passwords will become a thing of the past. With companies like Apple beta-testing passkeys on consumer devices and even Google describing them as ‘the beginning of the end of the password’, businesses will no doubt begin to adopt this more secure technology, stored on local devices, for any systems that hold sensitive data. Passwordless forms of identification mitigate a common attack route: cybercriminals exploiting personal information to gain unauthorised access.
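The security benefit comes from the underlying challenge-response design. The following is a minimal sketch of that principle, assuming Python and the cryptography package; a real passkey deployment would use WebAuthn/FIDO2 libraries and hardware-backed key storage rather than this simplified flow.

```python
# Minimal sketch of the challenge-response idea behind passkeys.
# Assumes Python with the 'cryptography' package; this is an illustration of the
# principle, not a WebAuthn implementation.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrolment: a key pair is generated on the user's device.
# Only the public key is shared with the server; the private key never leaves the device.
device_private_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_private_key.public_key()

# Sign-in: the server issues a random challenge...
challenge = os.urandom(32)

# ...the device signs it locally...
signature = device_private_key.sign(challenge)

# ...and the server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("Authenticated: no password was transmitted or stored.")
except InvalidSignature:
    print("Authentication failed.")
```

Because no shared secret ever crosses the network or sits in a server-side password database, there is nothing for a phishing page or database breach to harvest.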

Generative AI’s Impact on Information Warfare and Elections

In 2024, more than sixty countries will hold elections, and as politics barrels towards all-out conflict in many of them, it is more important than ever to strengthen cybersecurity so that fact-checked information and official government communications can be kept under a tighter grip. We are likely to see a steep rise in Generative AI-supported propaganda on social media.

In 2016, amidst the heat of a combative and unfriendly US Presidential election, Republican candidate Donald Trump popularised the term ‘Fake News’, which eight years later continues to plague corners of the internet in relation to ongoing global events. It was estimated that 25% of election-related tweets sampled during this time contained links to intentionally misleading or false news stories, shared in an attempt to boost a viewpoint’s popularity. Online trust comes hand-in-hand with security; without one, the other cannot exist.

While the use of AI in 2016 was extremely limited by today’s standards, what is now of striking concern is the access members of the public have to platforms where, at will, they can legitimise a controversial viewpoint or piece of ‘fake news’ by generating video or audio clips of political figures, or quotes and news articles, with a simple request. The ability to generate convincing text and media can significantly influence public opinion and sway electoral processes, destabilising a country’s internal and external cybersecurity.

Of greatest concern is the unsuspecting public’s inability to identify news generated by AI. Research from Cornell University found that people judged false news articles generated by AI to be credible over two-thirds of the time. Further studies found that humans were unable to identify articles written by ChatGPT at a rate better than random chance. As Generative AI’s sophistication increases, it will become ever more difficult to identify which information is genuine and to safeguard online security. This is critical, as Generative AI can now be used as ammunition in information warfare through the spread of hateful, controversial, and false propaganda during election periods.

In conclusion, 2024, like 2023, will see a great shift in focus toward internal security. A network is at its most vulnerable when the people who run it aren’t aligned in their strategies and values. Advanced technologies like AI and ransomware will remain a growing issue for the industry, destabilising networks not only externally but internally, too, as employees remain unaware of the effects that using such platforms might have.
