Welcome to our Cyber Security News Aggregator.

Cyber Tzar provides a cyber security risk management platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, just like a credit score.

Are we building AI on a secure and trusting foundation?

published on 2025-09-26 09:47:18 UTC by Millie Marshall Loughran
Content:

Katie Barnett, Director of Cyber Security at Toro Solutions, discusses why trusting AI requires not just technical controls, but culture, shared accountability and ethical leadership.

AI: Scaling to a new level

AI isn’t new. It’s been quietly powering everyday technologies for years, from facial recognition unlocking your phone to the traffic alerts that guide your route on Google Maps.

Within organisations, tools like Grammarly, scheduling assistants, voice-to-text meeting apps and customer service bots all rely on AI.

Developers use co-pilots to write code, sales teams use AI to draft emails and marketing teams use it to generate campaign ideas.

What’s changed is the speed and scale at which AI is being integrated into business operations.

Today, AI is embedded across organisations, shaping workflows, informing decisions and increasingly forming the backbone of core infrastructure.

For many companies, AI is driving efficiency gains and transforming how work gets done in almost every industry.

Over the past 18 months, the growing use of AI in business has been matched by daily headlines highlighting the risks:

  • A coding tool deleted an entire company database and replaced it with fake data
  • Researchers showed how large language models can carry out attacks autonomously
  • Businesses are unknowingly exposing sensitive data to public AI systems

If you’re a business leader reading those headlines, you’re likely asking:

What does this mean for me and for my organisation?

Visibility: The first line of defence

The first essential task is visibility – gaining a clear understanding of what AI tools different teams are currently using and what tools they would like to leverage.

This means managing AI adoption both from a business perspective (knowing how teams apply AI to their workflows) and from an IT perspective.

Without this visibility, organisations risk blind spots that lead to uncontrolled risks and missed opportunities for effective governance and support.

In my work with security and risk leaders across sectors, I’ve witnessed the consequences of this misalignment.

AI systems are being deployed at speed, but often without clear accountability, adequate controls or visibility into how risks are managed at scale.

When this happens, AI shifts from being a strategic asset to becoming a source of operational, legal and reputational vulnerability.

Trust in AI starts with people

AI can boost productivity, streamline decisions and reshape industries, but only if it’s trusted.

And trust doesn’t begin with technology; it begins with people.

The individuals who design, deploy and oversee AI systems play a central role in shaping how these systems are used and perceived within your organisation.

One question I’m frequently asked is: How do we build trust in AI across our organisation?

In my conversations with industry leaders, one theme continues to surface – culture is key.

Tools and frameworks are important, but they’re not enough.

Lasting trust is built through organisational culture: clear values, shared accountability and a commitment to doing the right thing with AI.

Establishing trust demands shared ownership across operations, legal, compliance and security teams.

Leading organisations embed governance into every stage of the AI lifecycle, treating security not as an afterthought but as integral to AI deployment.

The changing role of security teams

The old stereotype of the security team being the “department of no” doesn’t fit anymore.

In an AI-native organisation, security teams must become educators, enablers and policy makers.  

AI innovation thrives on speed, but that speed cannot come at the expense of visibility and control.

The security function now has a dual mandate: to empower safe experimentation while maintaining rigorous oversight.

This means embedding secure design principles into model development, ensuring sensitive data is protected at every stage and creating policies that developers understand and follow.

It also means investing in internal training, awareness and alignment across teams.

The risks are growing and they are often hidden

From my experience conducting AI security reviews, I’ve seen consistent blind spots across sectors.

Organisations are frequently using public large language models without proper data controls or privacy assessments.

There’s often no clear documentation of which models are live and what data they handle, and there is a widespread lack of clarity about who owns AI governance and risk mitigation.

These gaps are rarely the result of negligence or poor intent. More often, they come from enthusiasm running ahead of solid structure.

But even if unintentional, they can have serious consequences.

AI systems often process proprietary or sensitive data, whether it’s customer data, source code or financial models.

Without appropriate controls, this data can be retained by third-party systems, used to train external large language models or inadvertently exposed to other users.

This isn’t just a compliance headache; it carries real legal risks and can cause lasting reputational damage.
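To make that kind of control concrete: one common pattern is to screen prompts for obviously sensitive content before anything leaves the organisation. The sketch below is a minimal illustration in Python; the send_to_llm function, the regex patterns and the blocking behaviour are assumptions for the example, not a substitute for a proper data loss prevention service.

```python
import re

def send_to_llm(text: str) -> str:
    """Stand-in for the real call to an external model API (hypothetical)."""
    return "model response"

# Illustrative patterns only; a real deployment would rely on a proper
# data classification / DLP service rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive data types detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def safe_send(text: str) -> str:
    """Refuse to forward a prompt that appears to contain sensitive data."""
    findings = screen_prompt(text)
    if findings:
        raise ValueError(f"Prompt blocked, possible sensitive data: {', '.join(findings)}")
    return send_to_llm(text)
```

Even a crude gate like this forces the conversation about what data is allowed to reach an external model, which is often the missing step.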

What can you do?

As AI becomes more embedded in business operations, I believe it’s critical for all organisations to undergo a structured AI security review and ask themselves these key questions (one way to record the answers is sketched after the list):

  • What are the AI use cases, connected systems and data accessed?
  • Who owns, manages and is accountable for AI risks?
  • Where does data go? Are privacy regulations respected?
  • Are technical and organisational controls (like access restrictions, encryption, audit logs) sufficient?
  • What’s the risk posture, where are the gaps and how should they be addressed?
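One way to make such a review actionable is to capture the answers in a lightweight AI system register. The sketch below uses a Python dataclass; the field names and the example entry are assumptions about what such a record might hold, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal register of AI systems and their risks."""
    name: str                   # the tool or model in use
    use_case: str               # what the business uses it for
    owner: str                  # who is accountable for its risks
    data_accessed: list[str]    # categories of data it touches
    data_destination: str       # where prompts and outputs are processed and stored
    controls: list[str] = field(default_factory=list)   # access restrictions, encryption, audit logs...
    open_gaps: list[str] = field(default_factory=list)  # known issues still to address

# Hypothetical example entry
register = [
    AISystemRecord(
        name="Public LLM chat assistant",
        use_case="Drafting customer emails",
        owner="Head of Customer Operations",
        data_accessed=["customer names", "order history"],
        data_destination="Third-party cloud service, outside the organisation",
        controls=["SSO-restricted access"],
        open_gaps=["No privacy assessment", "Prompts may be retained by the vendor"],
    ),
]
```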

Building a secure AI foundation – what you can do now

If your organisation is exploring or deploying AI, here are five actions I’d recommend taking now:

  • Audit your AI landscape: Map where AI is already in use. Uncover shadow deployments and assess their risk levels
  • Define internal AI usage policies: Create guidelines that clarify what tools can be used, how they can be used and what data is off-limits (a minimal policy sketch follows this list)
  • Engage key stakeholders early: Include legal, compliance, HR and security in all AI initiatives. This builds shared accountability and reduces siloed decision-making
  • Run a formal AI security review: Use a structured methodology tailored to your business, ideally mapped to frameworks like ISO 27001 (or ISO 42001, the AI management standard), NIST or the EU AI Act
  • Invest in training and culture: Provide ongoing education to ensure teams understand the risks, use AI responsibly and know when to ask for help
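On the second action, an internal AI usage policy is easier to enforce when it exists as structured data as well as a written guideline. The sketch below is one minimal, assumed way to express that in Python; the tool names, data categories and checking logic are purely illustrative.

```python
# Illustrative internal AI usage policy expressed as data, so it can be
# checked automatically as well as read by people. Tool names and data
# categories are examples, not recommendations.
AI_USAGE_POLICY = {
    "approved_tools": {
        "code-assistant": {"allowed_data": ["non-proprietary source code"]},
        "meeting-transcriber": {"allowed_data": ["internal meeting audio"]},
    },
    "prohibited_data": ["customer personal data", "credentials and secrets", "unreleased financial results"],
}

def is_request_allowed(tool: str, data_categories: list[str]) -> bool:
    """Check a proposed AI use against the policy."""
    entry = AI_USAGE_POLICY["approved_tools"].get(tool)
    if entry is None:
        return False  # unapproved tool: escalate to the governance owner
    if any(cat in AI_USAGE_POLICY["prohibited_data"] for cat in data_categories):
        return False  # a prohibited data category is involved
    return all(cat in entry["allowed_data"] for cat in data_categories)

# Example check: using an approved tool with prohibited data is rejected
print(is_request_allowed("code-assistant", ["customer personal data"]))  # False
```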

This article was originally published in the September edition of Security Journal UK. To read your FREE digital edition, click here.

Article: Are we building AI on a secure and trusting foundation? - published 10 days ago.

https://securityjournaluk.com/building-ai-secure-trusting-foundation/   
Published: 2025 09 26 09:47:18
Received: 2025 09 27 03:21:11
Feed: Security Journal UK
Source: Security Journal UK
Category: Security
Topic: Security
Views: 11
