David Vigor, Director of End User Business at HID, explores how the unknown is the biggest risk in security.
In the world of security, facts should drive decisions. Yet time and again, organisations prioritise what feels safe over what is safe.
Through my ongoing work and conversations with customers across multiple sectors, a clear pattern has emerged: security strategies are often shaped more by perception than by objective risk.
There are very real pitfalls in how organisations perceive and respond to risk.
The decisions we make are frequently driven by emotion, habit or collective norms – not necessarily by what the data tells us.
In this article, I explore the most common psychological and organisational traps that distort risk perception – from emotional bias and short-term thinking to herd mentality and flawed assessments of likelihood versus impact – and how recognising this disconnect is the first step towards stronger, more resilient security strategies.
Security choices are often made under pressure, and emotional instincts can outweigh data-driven reasoning.
We frequently hear about organisations investing in highly visible security measures – such as CCTV cameras or complex password requirements – because they feel secure, while more critical areas like credential lifecycle management or staff training remain under-prioritised.
This reflects what behavioural scientists call the “affect heuristic” – a mental shortcut driven by emotion.
Measures that offer immediate reassurance tend to win attention, even when quieter, background systems may offer far greater risk mitigation.
It’s a reminder that security isn’t just about what we can see – it’s about addressing what’s most vulnerable.
Another pattern that regularly comes up in our discussions is the tendency to delay investment in long-term security.
Known as hyperbolic discounting, this cognitive bias causes us to undervalue future risks in favour of immediate demands.
Outdated technologies often remain in place because replacing them feels expensive, complex or unnecessary in the moment.
For example, many organisations still rely on 125 kHz proximity cards – despite their known vulnerabilities – simply because they haven’t experienced a breach yet.
This “wait until it breaks” mindset is common, but it leaves systems exposed.
Like many long-term threats, security vulnerabilities tend to develop slowly – until they erupt all at once.
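The hyperbolic discounting effect described above can be sketched numerically. A minimal illustration, assuming the standard hyperbolic form V = A / (1 + kD); the discount rate and the cost figures below are invented for illustration, not data from the article:

```python
def hyperbolic_value(amount, delay_years, k=0.8):
    """Perceived present value under hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay_years)

# Illustrative assumptions: an access-control upgrade costs 50,000 today
# and averts an expected 200,000 breach loss roughly five years out.
upgrade_cost_now = 50_000
future_breach_loss = 200_000

perceived_loss = hyperbolic_value(future_breach_loss, delay_years=5)
print(f"Perceived value of the future loss today: {perceived_loss:,.0f}")
# With k = 0.8, the 200,000 future loss is perceived as only 40,000 today --
# less than the 50,000 invoice, so the upgrade keeps being deferred.
```

The numbers are arbitrary, but the shape of the bias is the point: a sufficiently distant loss can “feel” smaller than a modest cost due now, which is exactly the “wait until it breaks” trap.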
Security decisions rarely happen in isolation. One of the more striking insights from customer focus groups is how often entire industries share the same habits – including the same blind spots.
The influence of groupthink is strong: organisations take comfort in sticking with familiar systems because “everyone else is doing the same.”
This results in slow adoption of more secure innovations, like cloud-based or mobile access control solutions, even when the benefits are clear.
History shows how dangerous this herd mentality can be – from the 2008 financial crisis to repeated cybersecurity lapses across sectors.
Effective risk management demands that we challenge consensus when it no longer serves us.
Another common mistake we hear about is conflating likelihood with impact.
Frequent, low-level risks – like tailgating through entry points – tend to receive more attention than rare but catastrophic threats, such as credential cloning or coordinated access attacks.
It’s understandable. Rare events are hard to imagine – and easy to downplay.
But when a low-probability event carries serious consequences, dismissing it outright is a gamble.
As one security manager told us, “If it happens once, it’s already too late.”
Strong risk strategies account for both frequency and severity – not just what’s likely, but what would be devastating if ignored.
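One common way to weigh frequency and severity together is annualised expected loss (likelihood × impact). A rough sketch – the probabilities and costs are illustrative assumptions, not incident data:

```python
def annual_expected_loss(annual_probability, impact_cost):
    """Classic risk weighting: expected annual loss = likelihood x impact."""
    return annual_probability * impact_cost

# Illustrative assumptions only:
risks = {
    "tailgating incident":       annual_expected_loss(0.90, 5_000),      # frequent, low impact
    "credential cloning attack": annual_expected_loss(0.02, 2_000_000),  # rare, catastrophic
}

for name, loss in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected annual loss = {loss:,.0f}")
# The rare event dominates: 0.02 x 2,000,000 = 40,000
# versus 0.90 x 5,000 = 4,500 for the frequent one.
```

Even this crude model shows why attention skewed towards the frequent, visible risk can misallocate budget: the rare, severe threat carries nearly ten times the expected loss.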
We also see how perceptions of value influence risk decisions – sometimes in ways that increase exposure.
When new technologies offer greater functionality, they’re often adopted quickly, even if they come with added security concerns.
Take smartphones, for instance. When they first entered the workplace, they lacked the robust protections of older corporate devices – but the perceived value won out.
Conversely, we now see secure tools like mobile access being resisted, not because they’re flawed, but because the perceived cost of change outweighs their long-term benefit.
Risk evaluation must be grounded in reality, not just habit or convenience.
Historical examples remind us how easy it is to misjudge risk. The Titanic was labelled “unsinkable” – until real-world factors proved otherwise.
In contrast, the Y2K bug was widely dismissed as an overreaction, even though careful preparation likely prevented catastrophe.
What these cases show is that successful mitigation can be invisible – which sometimes leads to complacency. No news doesn’t always mean no risk.
At its heart, the challenge isn’t just identifying threats – it’s thinking clearly about which ones matter most.
Emotional biases, time distortions, collective habits and skewed cost-benefit judgments all affect how we evaluate security.
But by bringing greater awareness to these influences, organisations can begin to close the gap between perception and reality – and design systems that protect against the right risks, not just the most visible ones.
The future of access control doesn’t lie in guesswork or gut feeling. It lies in building partnerships grounded in insight, reflection and experience.
Whether you’re updating legacy systems, rethinking mobile access or navigating sector-wide change, the right support should be built not on assumption, but on a clear understanding of how risk really works.
This article was originally published in the May edition of Security Journal UK.