James Henry, Consulting Innovation Director at Crossword Cybersecurity, explores some of the cyber AI trends we can expect to see this year and the actions organisations can take to thwart them.
If there is one thing 2023 reminded us of, it is that cyber criminals are not only ingenious and motivated, but also that some of the ‘old’ tricks still work very well if you know what you are doing.
From acquired consumer encryption keys being used to forge security tokens, to credential stuffing attacks used to gain access to systems, to sophisticated malware tools such as the LockBit and Hive ransomware infrastructures crippling organisations, it is clear that minimising the risks is all any organisation can do.
While that reality should make every CISO uncomfortable in their chair, 2023 was also the year when everyone got ‘giddy’ over AI, and that includes the cyber criminals.
In 2024, the usage, maturity and volume of these new malicious tools will increase.
We have already seen the emergence of malicious generative AI tools such as WormGPT, FraudGPT and PoisonGPT, among many others, which significantly increase cyber criminals’ capability and effectiveness in attacking businesses, whilst also lowering the skill and knowledge barrier to conducting those attacks.
These large language models (LLMs) are crafted for the specific purpose of creating unique (zero-day) malware, producing more effective phishing attacks, and creating and spreading misinformation.
Alongside the malicious Generative AI tools that cyber criminals are exploiting, there is an insider threat: staff seeking business advantage use Generative AI technology without the knowledge of IT departments – what is known as shadow IT – putting sensitive business and personal data into prompts or uploads to be summarised or reasoned over by public LLMs.
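One practical control is to screen prompts for obviously sensitive data before they ever leave the organisation. The sketch below is a minimal, hypothetical illustration in Python; the regex patterns and the scrub_prompt helper are assumptions for demonstration only, and a real deployment would sit behind a proper data loss prevention gateway with far richer detection.

```python
import re

# Hypothetical illustration: crude patterns for data that should never
# reach a public LLM. A production DLP gateway would use far richer
# detection (named-entity recognition, document classification labels, etc.).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    risky = "Summarise this: contact jane.doe@example.com, card 4111 1111 1111 1111"
    print(scrub_prompt(risky))
    # Summarise this: contact [REDACTED-EMAIL], card [REDACTED-CARD_NUMBER]
```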
Generative AI has hit the security industry at remarkable speed, and the rate of advancement continues apace.
As businesses race to adopt solutions for competitive advantage, privacy and security measures still need to be fully evaluated to ensure solutions are adequately secure, comply with regulation and legislation, and are used in a way that does not introduce unacceptable risk.
The emergence of, and huge advances in, text-to-image, text-to-video and text-to-audio generative AI lower the barrier to generating convincing deepfake content, so we can expect more of this in 2024.
Indeed, 2024 has already seen one of the highest-profile musicians on the planet become the victim of a viral deepfake.
The US, UK and Europe are seeking to use legislation to ensure AI-generated content is labelled (in the US case, watermarked); however, it is unclear how this will be achieved in practice across the internet, or how resistant these labels and watermarks will be to tampering and removal.
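To see why naive labelling is fragile, consider the short sketch below. It is purely illustrative, not how any regulation mandates labelling: it writes an ‘AI-generated’ marker into a PNG text chunk with Pillow, then shows that a plain re-save silently discards it. Schemes such as cryptographically signed provenance metadata exist precisely because of this weakness.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a dummy image and attach a provenance label as a PNG text chunk.
# (Illustrative assumption only: real schemes such as C2PA sign content
# cryptographically rather than relying on a plain metadata field.)
img = Image.new("RGB", (64, 64), color="red")
meta = PngInfo()
meta.add_text("ai_generated", "true")
img.save("labelled.png", pnginfo=meta)

# The label survives a straight read...
print(Image.open("labelled.png").text)   # {'ai_generated': 'true'}

# ...but a trivial re-save without the metadata strips it entirely.
Image.open("labelled.png").save("laundered.png")
print(Image.open("laundered.png").text)  # {}
```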
The Bletchley Declaration, signed by the countries attending the AI Safety Summit in November 2023, is a positive international step forward: it acknowledges the great potential that AI presents both now and in the future, whilst balancing that optimism with the importance of recognising, and starting to address, the associated political, personal and economic safety and security risks.
Generative AI has raised, and will continue to raise, the threat level for businesses, both by amplifying traditional cyber security attacks such as ransomware and phishing, and because well-intentioned staff improperly use public Generative AI solutions with personal and sensitive business data.
Employee cybersecurity training programmes remain an essential security control, helping to prevent accidents and to lessen the likelihood of deliberate targeted and opportunistic attacks succeeding.
However, staff cyber security awareness programmes need to be updated to incorporate these new threats and communicated in a way that changes behaviours.
This must be a high priority for all organisations in 2024, particularly as these same programmes will need to highlight how the threat from phishing has become even greater now that AI makes it far more convincing.
As with any risk, a suite of controls and a layered approach to addressing them is most effective. A blended approach of technical, personnel and procedural controls is worth evaluating.
Cyber security solution providers are racing to incorporate LLMs into their solutions – after all, we have to fight fire with fire.
These enhanced solutions will help teams better understand where an organisation is vulnerable, enabling staff to interrogate increasingly complex and voluminous security logs or datasets more easily.
It will be impossible for human analyst teams to do this alone – they will simply be overrun and unable to operate at the speeds needed.
AI will help them evaluate weak points within an infrastructure and to detect and minimise the impact of incidents.
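As a flavour of what that assistance can look like, here is a minimal sketch of LLM-assisted log triage, assuming the official openai Python client and a hypothetical model name; a production pipeline would add batching, redaction of the kind sketched earlier, and human review of every finding.

```python
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_auth_logs(log_lines: list[str]) -> str:
    """Ask an LLM to flag suspicious patterns in a small batch of auth logs.

    Hypothetical sketch: the model name and prompt are assumptions, and the
    output is a starting point for a human analyst, not a verdict.
    """
    prompt = (
        "You are assisting a SOC analyst. Review these authentication log "
        "lines and list any that suggest credential stuffing or brute force, "
        "with a one-line reason for each:\n\n" + "\n".join(log_lines)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your approved one
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

logs = [
    "2024-01-09T10:01:02 login failed user=alice src=203.0.113.7",
    "2024-01-09T10:01:03 login failed user=bob   src=203.0.113.7",
    "2024-01-09T10:01:04 login failed user=carol src=203.0.113.7",
    "2024-01-09T10:05:00 login ok     user=dave  src=198.51.100.4",
]
print(triage_auth_logs(logs))
```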
But make no mistake, AI will not replace the critical role that human analysts play in making cyber security decisions and deciding the actions to be taken.
We firmly believe that cybersecurity is a partnership between the technology, cyber security teams, employees, the processes an organisation has in place and its wider supply chain.
Ensuring this collaboration takes place, and stays in sync, remains one of the most challenging aspects of protecting an organisation.
Generative AI, for example, excels in cyber security use cases that automate mundane activities, and it can help address the cyber skills gap by acting as a ‘copilot’, intelligent assistant or teacher, augmenting staff skills, knowledge and experience and enabling security teams to be more productive and efficient.
Unquestionably, AI is going to be both friend and foe to cyber security teams as we move forward through 2024 and beyond.
Organisations should be finding trusted sources and partners that can help them separate the ‘AI noise’ from the ‘AI reality’. More than that, they should take the opportunity now to conduct a root-and-branch review of the cybersecurity technology, processes and training they have in place across the organisation.
There will undoubtedly be changes to make, and even opportunities to start exploiting AI in a way that allows cyber security professionals to focus on the challenge ahead of them, rather than firefighting their way through problems of old.