Welcome to our Cyber Security News Aggregator.

Cyber Tzar provides a cyber security risk management platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, just like a credit score.

Would you spot a deepfake?

published on 2024-08-28 14:12:10 UTC by Rebecca Knowles
Content:

Research suggests not all Britons are sure, says CyberArk

With the rapid advancement of artificial intelligence (AI) capabilities, AI-powered threats are evolving at an unprecedented pace. Security teams are particularly wary, with 93% of cybersecurity professionals expecting AI-powered tools to negatively impact their organisation within the next 12 months.

Deepfake generated audio

High-profile cases of malicious GenAI deepfake use are now becoming an everyday occurrence. Take UK engineering firm Arup, which fell victim to a £20m scam in which deepfake-generated audio of senior officers at the company was used to trick an employee into transferring funds to cybercriminals. If not addressed, this technology has the potential to prove immensely damaging, but business leaders have yet to recognise the extent of the danger looming on the horizon.

CyberArk Threat Landscape Report

CyberArk’s recent Threat Landscape report reveals that three quarters (75%) of security leaders feel confident in their employees’ ability to identify deepfakes of the company’s leadership team – whether video, audio or otherwise. Parallel research of UK office workers, however, shows that not all employees share that confidence. A number of workers have fears around the growing use of generative AI for malicious purposes, revealing a confidence gap between security leadership and the wider organisation.

CyberArk’s study of UK workers revealed that:

  • Alarmingly, more than 1 in 3 (34%) UK employees say they would struggle to differentiate between a real and a fake phone call or email from their boss, suggesting that security leaders’ confidence is misplaced.
  • Almost half (46%) of UK workers are apprehensive about their likeness being exploited in deepfakes.
  • These anxieties far exceed employees’ concerns about AI replacing their roles, with only 37% fearing that possibility.

Urgent need for employee education and vigilance

The research highlights that while the majority of UK office workers are confident in their ability to spot a deepfake, the sizeable minority that might be exploited represents a huge potential security weakness. Clearly, there is an urgent need for employee education and vigilance, as well as for proactive tools to combat the escalating risks posed by AI-powered attacks and, in particular, the identity security threat that deepfakes pose.

“The increasing sophistication of AI-generated deepfakes is blurring the lines between reality and deception, adding a new layer of complexity to identity-based attacks. Businesses need to realise this technology poses a legitimate threat and that – without the right protection – sooner or later these attacks are going to succeed,” says Rich Turner, President, EMEA at CyberArk.

“If aspects of employees’ digital identity are stolen or faked, the potential consequences could be extremely damaging. Ultimately, employees are your first and most vital line of defence and, once that line is breached, it leaves your business at great risk. Guarding sensitive access is all-important. That way, even if attackers can use a deepfake to steal a credential and get a foothold in the business, they can’t easily get to sensitive data without detection, limiting the damage the attack may cause. Businesses should also promote a culture where it’s OK to question or challenge things that don’t look quite right.”

More UK Security News

Article: Would you spot a deepfake? - published 2 months ago.

https://securityjournaluk.com/would-you-spot-a-deepfake/   
Published: 2024 08 28 14:12:10
Received: 2024 08 28 14:21:04
Feed: Security Journal UK
Source: Security Journal UK
Category: Security
Topic: Security
Views: 0
