Welcome to our Cyber Security News Aggregator.

Cyber Tzar provides a cyber security risk management platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, just like a credit score.
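
To make the "cyber risk score" idea concrete, here is a minimal, hypothetical sketch of how weighted assessment findings might be collapsed into a single 0-1,000 score. The function, categories, severities, and weights below are illustrative assumptions only, not Cyber Tzar's actual scoring model.

```python
# Hypothetical sketch: collapse weighted assessment findings into a 0-1000 score,
# where 1000 means no observed risk. Not Cyber Tzar's proprietary methodology.

def cyber_risk_score(findings, weights):
    """findings: category -> severity (0.0 = no risk, 1.0 = maximum risk).
    weights: category -> relative importance of that category."""
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 1000
    # Weighted average severity across all assessed categories.
    weighted_risk = sum(
        severity * weights[category]
        for category, severity in findings.items()
    ) / total_weight
    # Invert and scale so that lower risk yields a higher score.
    return round((1.0 - weighted_risk) * 1000)

# Illustrative example with two made-up assessment categories.
findings = {"network_exposure": 0.35, "patching_cadence": 0.6}
weights = {"network_exposure": 2.0, "patching_cadence": 1.0}
print(cyber_risk_score(findings, weights))  # 567
```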

iProov study highlights biometrics as solution of choice to counter deepfake risk

published on 2024-08-20 09:59:06 UTC by Rebecca Knowles
Content:

A new global survey of technology decision-makers from iProov reveals that organisations recognise that deepfakes present a risk of financial fraud, defamation and reputational damage wherever an individual needs to verify their identity remotely.

The risk of deepfakes is rising: almost half of organisations (47%) have encountered a deepfake, and seven in ten (70%) believe that deepfake attacks, which are created using generative AI tools, will have a high impact on their organisations.

Image: iProov, AI, deepfakes and biometrics

Yet perceptions of AI remain hopeful: while two-thirds of organisations (68%) believe it is effective at creating cybersecurity threats, even more (84%) find it instrumental in protecting against them.

This is according to a new global survey of technology decision-makers from iProov, a leading provider of science-based biometric identity solutions, which also found that three-quarters (75%) of the solutions being implemented to address the deepfake threat are biometric.

The Good, The Bad, and The Ugly

The Good, The Bad, and The Ugly is a global survey commissioned by iProov that gathered the opinions of 500 technology decision-makers from the UK, US, Brazil, Australia, New Zealand and Singapore on the threat of generative AI and deepfakes.

While organisations recognise the increased efficiencies that AI can bring, those benefits are also enjoyed by threat technology developers and bad actors. Almost three-quarters (73%) of organisations are implementing solutions to address the deepfake threat, but confidence is low: the study identified an overriding concern that not enough is being done to combat them, with almost two-thirds (62%) worrying that their organisation isn't taking the threat of deepfakes seriously enough.

The survey shows that organisations recognise deepfakes as a real and present threat. They can be used against people in numerous harmful ways, including defamation and reputational damage, but perhaps the most quantifiable risk is financial fraud. Here, deepfakes can be used to commit large-scale identity fraud by impersonating individuals in order to gain unauthorised access to systems or data, initiate financial transactions, or deceive others into sending money, on the scale of the recent Hong Kong deepfake scam.

The stark reality is that deepfakes pose a threat to any situation where an individual needs to verify their identity remotely, yet those surveyed worry that organisations aren't taking the threat seriously enough.

“We’ve been observing deepfakes for years but what’s changed in the past six to twelve months is the quality and ease with which they can be created and cause large scale destruction to organisations and individuals alike,” said Andrew Bud, Founder and CEO, iProov.

“Perhaps the most overlooked use of deepfakes is the creation of synthetic identities which, because they’re not real and have no owner to report their theft, go largely undetected while wreaking havoc and defrauding organisations and governments of millions of dollars.”

“And despite what some might believe, it’s now impossible for the naked eye to detect quality deepfakes. Even though our research reports that half of organisations surveyed have encountered a deepfake, the likelihood is that this figure is a lot higher because most organizations are not properly equipped to identify deepfakes. With the rapid pace at which the threat landscape is innovating, organisations can’t afford to ignore the resulting attack methodologies and how facial biometrics have distinguished themselves as the most resilient solution for remote identity verification,” added Bud. 

Regional nuances 

The study also reveals nuanced regional perceptions of deepfakes. APAC (51%), European (53%), and LATAM (53%) organisations are significantly more likely than North American (34%) organisations to say they have encountered a deepfake. APAC (81%), European (72%), and North American (71%) organisations are significantly more likely than LATAM organisations (54%) to believe deepfake attacks will have an impact on their organisation.

Amidst the ever-shifting threat landscape, the tactics employed to breach organisations often mirror those used in identity fraud. Unsurprisingly, deepfakes are now tied for third place amongst the most prevalent concerns for survey respondents, ranked as follows: password breaches (64%), ransomware (63%), phishing/social engineering attacks (61%), and deepfakes (61%).

AI’s not all bad 

There are many different types of deepfakes, but they all share one common denominator: they are created using generative AI tools. Organisations recognise that generative AI is innovative, secure, reliable, and helps them solve problems. They view it as more ethical than unethical and believe it will have a positive impact on the future. And they're taking action: just 17% have not increased their budgets for programmes that address AI-related risk. Additionally, most have introduced policies on the use of new AI tools.

Biometrics leads the charge against deepfakes  

Biometrics have emerged as organisations' solution of choice to address the threat of deepfakes. Organisations stated that they are most likely to use facial and fingerprint biometrics; however, the type of biometric can vary based on the task. For example, the study found organisations consider facial verification to be the most appropriate additional mode of authentication to protect against deepfakes for account access/log-in, changes to personal account details, and typical transactions.

Software is not enough 

It’s clear from the study that organisations view biometrics as a specialist area of expertise, with nearly all (94%) agreeing that a biometric security partner should be more than just a software product. Organisations surveyed stated that they are looking for a solution provider that evolves and keeps pace with the threat landscape, with continuous monitoring (80%), multi-modal biometrics (79%), and liveness detection (77%) all featuring prominently among their requirements for adequately protecting biometric solutions against deepfakes.
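
As a rough illustration of how two of the requirements above, liveness detection and a biometric face match, typically combine in a remote identity verification decision, the sketch below gates the match on a liveness check. The function names, callables, and threshold are illustrative assumptions, not iProov's product interface.

```python
# Hypothetical sketch of a remote identity verification decision:
# a capture must first pass liveness detection (defending against deepfake or
# replay injection) before a face match against a reference photo is trusted.

from dataclasses import dataclass

@dataclass
class VerificationResult:
    live: bool          # did the capture pass liveness detection?
    match_score: float  # similarity between capture and reference photo (0-1)
    verified: bool      # overall accept/reject decision

def verify_identity(selfie_capture, id_document_photo,
                    liveness_check, face_matcher,
                    match_threshold: float = 0.8) -> VerificationResult:
    """Reject the attempt unless the capture is live AND the face matches."""
    if not liveness_check(selfie_capture):
        return VerificationResult(live=False, match_score=0.0, verified=False)
    score = face_matcher(selfie_capture, id_document_photo)
    return VerificationResult(live=True, match_score=score,
                              verified=score >= match_threshold)

# Illustrative usage with stand-in checks (a real system would call dedicated
# liveness-detection and face-matching models here).
result = verify_identity(
    selfie_capture="frame.jpg",
    id_document_photo="passport.jpg",
    liveness_check=lambda capture: True,
    face_matcher=lambda a, b: 0.91,
)
print(result)  # VerificationResult(live=True, match_score=0.91, verified=True)
```

Ordering the liveness gate before the match reflects the concern raised in the study: a convincing deepfake can defeat a face comparison on its own, so the capture must first be shown to come from a live, present person.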

Survey methodology 

The Good, The Bad, and The Ugly Survey was developed in collaboration with Hanover Research. 500 global respondents were recruited across industries including Banking, eCommerce, Finance and Accounting, Healthcare/Medical, Hospitality, Insurance, Retail, Telecommunications, and Travel. This was done via a third-party panel provider and the survey was administered online in spring 2024.

Respondents were professionals in IT, Operations, Network Security, Cybersecurity, Digital Experience, Risk Management, or Product Management departments with primary decision-making responsibility for the selection and purchase of cybersecurity solutions for their organisation.

More UK Security News

Article: iProov study highlights biometrics as solution of choice to counter deepfake risk
URL: https://securityjournaluk.com/iproov-study-highlights-biometrics-as-solution-of-choice-to-counter-deepfake-risk/
Published: 2024-08-20 09:59:06
Source: Security Journal UK
Category: Security
