By understanding the nuances and potential pitfalls of Artificial Intelligence (AI) and Deep Learning (DL), security leaders can fortify their defences and stay ahead of the ever-evolving threat landscape, writes Philip Ingram MBE.
AI and DL algorithms possess the remarkable ability to analyse vast volumes of data, detecting patterns and anomalies that human analysts may overlook.
By leveraging predictive intelligence and natural language processing, these systems can proactively identify potential threats.
Many commentators talk of AI and ML only in a cyber context; however, given the digitisation of all security data, these technologies enable organisations to mitigate risks before they manifest into full-scale attacks, whether cyber or physical.
Many of these patterns and anomalies are detected through enhanced threat intelligence and behavioural analysis.
Generative AI models can provide security analysts with deeper insights into the nature and behaviour of emerging cyber threats.
By automatically scanning code, network traffic and other data sources, these algorithms can generate rich, contextual information that facilitates a more comprehensive understanding of the threat landscape.
In the world of physical security, AI-enhanced behavioural analysis can identify potential threat actors operating in different environments.
The British Transport Police, for example, has used such enhanced techniques to successfully identify potential suicide risks on the rail network.
Given the changing security landscape, the sheer volume of data available means the complexities are beyond pure human analysis.
AI and DL algorithms can scan infrastructure, code, configurations, data and behaviour to uncover weaknesses and discover patterns or suspicious activity, enabling security teams to focus human oversight on the areas that trigger alerts rather than trying to take it all in.
This allows them to prioritise remediation efforts based on the likelihood and potential impact of attacks.
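To make this concrete, the following is a minimal, illustrative sketch of how such triage might work, assuming hypothetical network-flow features and an off-the-shelf anomaly detector (scikit-learn's IsolationForest); it is not the method of any specific product.

# Hedged sketch: scoring network-flow records so analysts review only the most
# anomalous ones. The feature layout and threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assume each row is a flow: [bytes_sent, bytes_received, duration_s, dest_port]
rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=[5e4, 2e4, 30, 443], scale=[1e4, 5e3, 10, 1], size=(1000, 4))
suspicious = np.array([[5e6, 1e2, 2, 4444]])   # large outbound burst to an odd port
flows = np.vstack([normal_flows, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)
scores = model.decision_function(flows)        # lower score = more anomalous

# Surface only the worst few flows for human review, rather than every event.
for idx in np.argsort(scores)[:5]:
    print(f"flow {idx}: anomaly score {scores[idx]:.3f} -> queue for analyst review")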
By monitoring behavioural and other triggers in the physical world, systems can alert security operations centres (SOCs), both helping to mitigate the impact of security incidents and facilitating rapid incident response.
This synergy between artificial intelligence and a balanced, proportionate human response, where necessary, can significantly enhance an organisation's overall security posture while reducing manpower and therefore costs.
By automating repetitive tasks and providing actionable insights, AI and DL can augment the capabilities of security professionals, empowering them to focus on strategic initiatives and higher-level threat priorities.
However, it is not just the automation of repetitive tasks that matters; the ability to analyse new data rapidly and compare it against wider data signatures to predict what could happen is equally significant.
AI extraction of drone acoustic signatures from background noise around prisons or Critical National Infrastructure sites, for example, gives another layer of relatively inexpensive detection for an emerging threat vector.
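As a hedged illustration only, and not a description of any system deployed at such sites, acoustic drone detection might be sketched along these lines, assuming a small set of hypothetical labelled audio clips and standard open-source libraries (librosa and scikit-learn):

# Hedged sketch of acoustic drone detection: summarise short audio windows as
# spectral features and classify them as "drone" or "background". File names,
# labels and classifier choice are illustrative assumptions only.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    # Load a short clip and summarise it as mean MFCCs (a common audio fingerprint).
    y, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labelled training clips recorded on site.
train_paths = ["drone_01.wav", "drone_02.wav", "background_01.wav", "background_02.wav"]
labels = [1, 1, 0, 0]   # 1 = drone present, 0 = background noise only

X = np.stack([clip_features(p) for p in train_paths])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Score a new window of microphone audio and raise an alert if a drone is likely.
if clf.predict([clip_features("live_window.wav")])[0] == 1:
    print("Possible drone acoustic signature detected - alert the SOC")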
There are many more potential uses.
As AI and DL have developed, so too has our understanding of the associated challenges.
The reliance of AI systems on large datasets raises concerns about data privacy and governance.
Also, AI and DL models are not infallible, and the potential for false positives and negatives can lead to wasted efforts or overlooked threats.
Rigorous data preparation, model testing, and continuous monitoring are crucial to maintain the reliability and accuracy of these systems.
However, using them to inform a human in the loop where appropriate, rather than to replace all human interaction, mitigates much of this.
The black-box nature of many AI and DL models can make it challenging for security and risk professionals to understand the reasoning behind their decisions.
This lack of transparency can undermine trust in the systems and hinder the ability to fine-tune and improve their performance.
This lack of transparency and the associated distrust are heightened when reports of bias or false positives emerge. Biases inherent in the training data or the algorithms themselves can lead to flawed threat detection and risk assessment.
AI and DL solution developers must address these biases through comprehensive data curation, diverse model development and continuous monitoring.
Security professionals must be confident that the developers have addressed and, if necessary, quantified any potential issues.
However, what is critical with any new technological capability is to understand how its use fits with, complements and enhances an organisation's broader goals and strategies, or a corporate entity's business goals and strategies.
Too often, new technologies are rushed in as a potential game-changing panacea without working out how they fit into a wider business context.
One thing common across AI and DL solutions is that data is at the core, and robust data governance, including an understanding of risk and security, must sit at the heart of related decision making.
As the security landscape continues to evolve, the role of AI and DL will become increasingly pivotal.
Generative AI models, for instance, will likely play a more prominent role in automating the creation of security policies, incident response playbooks, and vulnerability remediation guidance.
Security feeds will benefit from more intelligent analysis of ever larger volumes of data, potentially enabling manpower reductions without removing the human from the loop entirely.
Advancements in natural language processing and computer vision will enhance the ability to detect and mitigate social engineering attacks, including deepfakes and phishing attempts.
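As a simple, hypothetical sketch of the NLP side of that capability, a basic phishing-text classifier might look like this (the example messages and labels are invented purely for illustration):

# Hedged sketch: a simple text classifier flagging likely phishing messages.
# Production systems would use far richer features, data and model validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account has been suspended, verify your password here immediately",
    "Urgent: confirm your bank details to avoid account closure",
    "Minutes from yesterday's project meeting are attached",
    "Reminder: the office closes early on Friday",
]
labels = [1, 1, 0, 0]   # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(messages, labels)

incoming = "Please verify your password to keep your account active"
print("phishing probability:", model.predict_proba([incoming])[0][1])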
This article was originally published in the August Edition of Security Journal UK.