Welcome to our Cyber Security News Aggregator.

Cyber Tzar provides a cyber security risk management platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, just like a credit score.

Is AI the next blind spot of risk management? 

published on 2025-05-27 08:00:00 UTC by Millie Marshall Loughran
Content:

Leon Storey, Head of AI at Solace Global Risk, discusses ways to integrate AI into risk mitigation strategies without creating new blind spots.

AI: the route to mitigating cyber-threats

The speed and complexity of today’s risk landscape means there’s little room for slow decisions – or static thinking.

Geopolitical tension, cyber-threats and growing regulatory pressure have made risk management more dynamic than ever.

And in that context, it’s no surprise that artificial intelligence is seen as the obvious answer to mitigate risk, while maximising resources. 

But amid the hype, there’s a tougher conversation to be had: How do we integrate AI in a way that’s genuinely useful, without introducing new blind spots in the process? 

The pressure to evolve: Why risk management needs AI 

Risk management, especially in global operations and travel security, has always required good judgement under pressure.

But increasingly, that judgement needs to be paired with speed and scale.

Risk teams are being asked to track more threats, across more regions and with more urgency.

This is combined with navigating tighter budgets and serving a workforce that expects real-time support in every time zone. 

Artificial intelligence brings real value here.

It can filter through noise, surface critical insights faster and reduce the burden of manual data wrangling.

In fast-evolving scenarios, that speed can make the difference between proactive intervention and a threat becoming a significant crisis.
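As a rough illustration of what "filtering through noise" can look like in practice, the sketch below scores incoming feed items against watch keywords and regions and surfaces only the highest-priority ones. The field names, keyword weights and threshold are all hypothetical, and in a real deployment the crude keyword scoring would likely be replaced by a model-based classifier.

```python
from dataclasses import dataclass

# Hypothetical keyword weights – a real deployment would tune these per region
# and threat model, or swap in a model-based classifier.
SEVERITY_KEYWORDS = {
    "ransomware": 5,
    "data breach": 4,
    "protest": 2,
    "road closure": 1,
}

@dataclass
class FeedItem:
    source: str
    region: str
    text: str

def score_item(item: FeedItem, watched_regions: set) -> int:
    """Crude relevance score: keyword weights plus a bump for watched regions."""
    text = item.text.lower()
    score = sum(weight for kw, weight in SEVERITY_KEYWORDS.items() if kw in text)
    if item.region in watched_regions:
        score += 3
    return score

def triage(items: list, watched_regions: set, threshold: int = 5) -> list:
    """Return only items scoring at or above the threshold, highest first."""
    scored = sorted(((score_item(i, watched_regions), i) for i in items),
                    key=lambda pair: pair[0], reverse=True)
    return [item for score, item in scored if score >= threshold]
```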

But as with any digital tool, its value depends on how you use it – and how much you still trust your own instincts when AI draws its own conclusions.

Treating AI as a co-pilot, not a crystal ball 

While these are still the early days of artificial intelligence (if we’re comparing it to the internet, it’s the equivalent of the mid-1990s), Large Language Models (LLMs) are evolving at a rapid pace.

Despite AI being in its relative infancy, the technology available today offers huge potential to transform how we manage risk.

For risk teams, it’s already a working tool sorting through intelligence feeds, scanning open-source data and generating alerts faster than any analyst ever could.

But speed alone isn’t the win.

The real value lies in how artificial intelligence can support decision-making when the pressure is on. 

It can highlight early warning signs of disruption, bring consistency to reporting and help standardise how intelligence is shared across departments.

Done well, it moves teams out of reactive firefighting mode and into a more strategic, proactive posture. 

This presents a paradox: Move too slowly and fall behind, but move too fast and you could introduce blind spots into already complex systems – that could have significant consequences. 

The problem is artificial intelligence is only as good as the data it sees – and the context it understands.

Which means it should be treated less like a crystal ball and more like a co-pilot.

One that’s fast, but sometimes lacking in nuance that human judgement can deliver. 

Human Insight: Still the most important tool in the kit 

When OpenAI released ChatGPT in 2022, many thought that artificial intelligence could replace a vast number of jobs, including risk management roles. 

But at the core of successful risk management is a deep understanding of the world—specifically, knowing what constitutes “normal”, and what signals something out of the ordinary that demands action.

Yes, human judgement is imperfect.

People make mistakes, miss details and suffer from decision fatigue.

Artificial intelligence doesn’t get tired or distracted, but, when LLMs are tasked with identifying what’s normal based purely on training data, that’s where the risk starts to shift.

These models learn patterns from the past, but they don’t truly understand context in the way experienced analysts do.

And when something unprecedented happens, or something falls outside those learned patterns, artificial intelligence may not flag it at all.

That’s why human oversight remains critical.

AI might accelerate how we process information, but it shouldn’t be the one making final decisions in high-stakes scenarios.

It’s not about resisting automation, it’s about designing systems where machines support, rather than replace, human judgement.

The danger isn’t just misinformation or ‘hallucinations’, but misplaced confidence in AI-generated outputs.

Without proper checks, we risk creating faster workflows that still arrive at the wrong conclusions. 

The risk of over-reliance on AI

The second part of the misjudgement lies in treating AI-generated conclusions as fact.

Our brains naturally seek the path of least resistance, especially when it comes to offloading mental labour.

And artificial intelligence makes that incredibly easy by automating cognitive effort and decision-making at speed.

But that convenience can lead to over-reliance, making us less likely to challenge what’s presented, especially when we assume AI is correct 99% of the time.

That’s why human verification remains vital.

Adding nuance, context and critical thinking to artificial intelligence output helps ensure that judgement isn’t outsourced entirely.

This can be addressed by embedding structured verification layers into workflows and combining automated cross-checks with deliberate human oversight, so scrutiny isn’t lost in the name of efficiency. 
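A minimal sketch of what such a verification layer might look like is below. It assumes a simple in-house assessment object; the checks, thresholds and the human_review callback are illustrative stand-ins rather than a prescribed design, but they show the pattern of automated cross-checks running first and deliberate human oversight handling anything that gets flagged.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AIAssessment:
    summary: str
    sources: list = field(default_factory=list)   # references the model claims to rely on
    confidence: float = 0.0                        # model-reported confidence, if available

@dataclass
class ReviewDecision:
    approved: bool
    notes: str

def automated_cross_checks(assessment: AIAssessment, source_corpus: dict) -> list:
    """Cheap machine checks that run before a person ever sees the output."""
    issues = []
    if not assessment.sources:
        issues.append("no sources cited")
    issues += [f"cited source not in corpus: {s}"
               for s in assessment.sources if s not in source_corpus]
    if assessment.confidence < 0.6:                # illustrative threshold
        issues.append("low model confidence")
    return issues

def verify(assessment: AIAssessment, source_corpus: dict,
           human_review: Callable) -> ReviewDecision:
    """Anything the automated checks flag is routed to a human reviewer."""
    issues = automated_cross_checks(assessment, source_corpus)
    if issues:
        return human_review(assessment, issues)    # analyst makes the final call
    return ReviewDecision(approved=True, notes="passed automated cross-checks")
```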

Governance first: Building AI into your risk strategy responsibly 

Bringing artificial intelligence into your risk management practices holds real potential, but one of the most overlooked risks is how easily AI can begin learning from the information it’s given.

LLMs are designed to enhance their performance through exposure to data, but by their nature this can become a vulnerability – particularly when sensitive company information, personal data, or intellectual property is involved. 

That’s why AI integration isn’t just a technical decision.

It’s a legal, ethical, and operational one, and a joined-up approach is therefore essential.

Involving the CISO, legal, HR and compliance teams from the outset can surface risks before they’re baked into workflows.

These conversations help ensure your duty of care is properly considered.

This is especially important when AI will be interacting with traveller data, location tracking, or incident reports. 

Practically, this means asking upfront: 

  • Are we exposing sensitive data through AI tools? 
  • How is data being stored, processed, and potentially re-used by third-party models? 
  • Have we put controls in place to avoid unintentional data leakage or misuse? (One simple guard is sketched below.) 
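On the data-leakage question in particular, a very simple control is to redact obvious identifiers before any text is sent to an external model. The patterns below are hypothetical and far from exhaustive – a real deployment would rely on a vetted DLP or PII-detection tool – but they illustrate the principle that only sanitised text should ever leave the organisation.

```python
import re

# Hypothetical redaction patterns – illustrative only, not an exhaustive PII catalogue.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "PASSPORT": re.compile(r"\b[A-Z]{1,2}\d{6,9}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings before the prompt leaves the organisation."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def build_external_prompt(incident_report: str) -> str:
    """Only redacted text is ever included in a request to a third-party model."""
    return ("Summarise the following incident report for a security briefing:\n\n"
            + redact(incident_report))
```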

Getting clarity on these questions early avoids expensive fixes later.

It also helps build confidence across the organisation – ensuring the artificial intelligence you introduce strengthens your risk posture, rather than quietly creating new exposures. 

The pitfalls: When AI fails and what that means for security leaders 

One of the core challenges with artificial intelligence is that its outputs are only as strong as the data it’s trained on.

Most models are built on historical data and statistical patterns which means they can struggle with nuance, emerging threats, or diverse user needs that weren’t well represented in the original training set. 

Building a reliable model for risk management requires a careful balance.

On one hand, you need to protect sensitive data – ringfencing proprietary information and ensuring compliance with privacy regulations.

On the other, you need to feed the model enough high-quality, relevant data to make its outputs useful and accurate. 

Bias is another risk that’s often underestimated.

If diversity considerations, particularly for employees at greater risk due to gender, ethnicity, or geopolitical context, aren’t reflected in the dataset, then neither will they be reflected in the AI’s recommendations.

And when you introduce models that rely on AI-on-AI training, such as DeepSeek’s R1, you add another layer of complexity.

These systems learn from the outputs of other models, increasing the risk of reinforcing misinformation, hallucinations and blind spots. 

This is why accuracy checks and human-in-the-loop verification are an essential part of helping artificial intelligence distinguish between fact and assumption. 
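One way to make that distinction concrete is to label each claim in an AI-generated summary according to whether it is supported by the underlying intelligence. The word-overlap scoring below is a deliberately naive stand-in for the retrieval or entailment check a real pipeline would use; anything it cannot ground is flagged for an analyst rather than passed through as fact.

```python
def support_score(claim: str, passage: str) -> float:
    """Fraction of the claim's words that appear in a source passage – a crude
    stand-in for a proper retrieval or entailment check."""
    claim_words = set(claim.lower().split())
    passage_words = set(passage.lower().split())
    return len(claim_words & passage_words) / max(len(claim_words), 1)

def label_claims(claims: list, source_corpus: list, threshold: float = 0.7) -> list:
    """Mark each claim as 'supported' or 'assumption – needs analyst review'."""
    labelled = []
    for claim in claims:
        best = max((support_score(claim, p) for p in source_corpus), default=0.0)
        labelled.append((claim, "supported" if best >= threshold
                         else "assumption – needs analyst review"))
    return labelled
```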

The future of risk management

The future of risk management isn’t about replacing analysts with algorithms.

It’s about freeing up time, removing noise, and enabling faster, more confident decisions – without losing the human context that risk decisions depend on. 

This means building strong governance from the start.

It means collaborating across the business and bringing in CISOs, HR, legal and compliance teams early.

It means understanding where your data comes from and putting systems in place to question, verify and improve what artificial intelligence delivers. 

The organisations that get this right won’t be the ones chasing the latest model.

They’ll be the ones asking better questions, embedding AI into workflows responsibly and building trust, in both the tech and the teams using it. 

This article was originally published in the May edition of Security Journal UK.

Article: Is AI the next blind spot of risk management?
https://securityjournaluk.com/is-ai-the-next-blind-spot-of-risk-management/
Published: 2025-05-27 08:00:00
Source: Security Journal UK
Category: Security
