Katie Barnett, Director of Cyber Security at Toro Solutions, discusses why trusting AI requires not just technical controls, but culture, shared accountability and ethical leadership.
AI isn’t new. It’s been quietly powering everyday technologies for years, from the facial recognition that unlocks your phone to the traffic alerts that guide your route on Google Maps.
Within organisations, tools like Grammarly, scheduling assistants, voice-to-text meeting apps and customer service bots all rely on AI.
Developers use co-pilots to write code, sales teams use AI to draft emails and marketing teams use it to generate campaign ideas.
What’s changed is the speed and scale at which AI is being integrated into business operations.
Today, AI is embedded across organisations, shaping workflows, informing decisions and increasingly forming the backbone of core infrastructure.
For many companies, AI is driving efficiency gains and transforming how work gets done in almost every industry.
Over the past 18 months, the growing use of AI in business has been matched by daily headlines highlighting the risks:
If you’re a business leader reading those headlines, you’re likely asking:
What does this mean for me and for my organisation?
The first essential task is visibility – gaining a clear understanding of what AI tools different teams are currently using and what tools they would like to leverage.
This means managing AI adoption both from a business perspective (knowing how teams apply AI to their workflows) and from an IT perspective.
Without this visibility, organisations risk blind spots that lead to uncontrolled risks and missed opportunities for effective governance and support.
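To make that visibility concrete, here is a minimal sketch, in Python, of what a shared AI tool register could look like. The fields, tool names and governance check are illustrative assumptions rather than a prescribed standard, but even a lightweight record like this gives business and IT teams a common view of what is in use, who owns it and what data it may touch.

```python
from dataclasses import dataclass, field
from enum import Enum


class DataSensitivity(Enum):
    """Broad data classifications; adapt to your organisation's own scheme."""
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"


@dataclass
class AIToolRecord:
    """One entry in an organisation-wide AI tool inventory."""
    name: str                          # e.g. "GitHub Copilot"
    owning_team: str                   # business owner accountable for the tool
    use_case: str                      # the workflow it supports
    data_sensitivity: DataSensitivity  # highest class of data it may touch
    approved: bool = False             # has it passed a governance review?
    notes: list = field(default_factory=list)


# Capture both sanctioned tools and those teams would like to adopt.
inventory = [
    AIToolRecord("GitHub Copilot", "Engineering", "code completion",
                 DataSensitivity.CONFIDENTIAL, approved=True),
    AIToolRecord("Public chatbot", "Marketing", "campaign idea generation",
                 DataSensitivity.INTERNAL, approved=False,
                 notes=["Requested; privacy assessment pending"]),
]

# A simple governance check: flag unapproved tools that may touch non-public data.
for record in inventory:
    if not record.approved and record.data_sensitivity is not DataSensitivity.PUBLIC:
        print(f"Review needed: {record.name} (owned by {record.owning_team})")
```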
In my work with security and risk leaders across sectors, I’ve witnessed the consequences of this misalignment.
AI systems are being deployed at speed, but often without clear accountability, adequate controls or visibility into how risks are managed at scale.
When this happens, AI shifts from being a strategic asset to becoming a source of operational, legal and reputational vulnerability.
AI can boost productivity, streamline decisions and reshape industries, but only if it’s trusted.
And trust doesn’t begin with technology; it begins with people.
The individuals who design, deploy and oversee AI systems play a central role in shaping how these systems are used and perceived within your organisation.
In my conversations with industry leaders, one theme continues to surface – culture is key.
Tools and frameworks are important, but they’re not enough.
Lasting trust is built through organisational culture: clear values, shared accountability and a commitment to doing the right thing with AI.
Establishing trust demands shared ownership across operations, legal, compliance and security teams.
Leading organisations embed governance into every stage of the AI lifecycle, treating security not as an afterthought but as integral to AI deployment.
The old stereotype of the security team being the “department of no” doesn’t fit anymore.
In an AI-native organisation, security teams must become educators, enablers and policy makers.
AI innovation thrives on speed, but that speed cannot come at the expense of visibility and control.
The security function now has a dual mandate: to empower safe experimentation while maintaining rigorous oversight.
This means embedding secure design principles into model development, ensuring sensitive data is protected at every stage and creating policies that developers understand and follow.
It also means investing in internal training, awareness and alignment across teams.
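As one small illustration of what “protected at every stage” can mean in practice, the sketch below strips obviously sensitive values from a prompt before it is sent to any external AI service. The patterns and placeholder names are assumptions for illustration only; a production deployment would rely on a vetted data loss prevention or classification service rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real deployments should use a vetted
# data classification / DLP service rather than ad-hoc regular expressions.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(prompt: str) -> str:
    """Replace likely sensitive values with placeholders before the text
    leaves the organisation for a third-party model."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


# Whatever reaches an external model should never include raw customer data.
raw = "Customer jane.doe@example.com called about card 4111 1111 1111 1111."
print(redact(raw))
# -> Customer [EMAIL REDACTED] called about card [CARD_NUMBER REDACTED].
```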
In my experience conducting AI security reviews, I’ve seen consistent blind spots across sectors.
Organisations are frequently using public large language models without proper data controls or privacy assessments.
There’s often no clear documentation of which models are live and what data they handle, and there is a widespread lack of clarity about who owns AI governance and risk mitigation.
These gaps are rarely the result of negligence or poor intent. More often, they come from enthusiasm running ahead of solid structure.
But even if unintentional, they can have serious consequences.
AI systems often process proprietary or sensitive data, whether it’s customer data, source code or financial models.
Without appropriate controls, this data can be retained by third-party systems, used to train external large language models or inadvertently exposed to other users.
This isn’t just a compliance headache; it carries real legal risks and can cause lasting reputational damage.
As AI becomes more embedded in business operations, I believe it’s critical for all organisations to undergo a structured AI security review and ask themselves these key questions:
If your organisation is exploring or deploying AI, here are five immediate actions I’d recommend doing now:
This article was originally published in the September edition of Security Journal UK.