Welcome to our Cyber Security News Aggregator.

Cyber Tzar provides a cyber security risk management platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, just like a credit score.

The fraud outlook for 2024: AI risks are on the rise

published on 2024-04-12 07:45:00 UTC by James Humphreys
Content:

In this exclusive article, Simon Horswell, Senior Fraud Specialist, Onfido discusses the fraud outlook for 2024 and how AI risks are on the rise.

If there was ever an indication of how fast technology can evolve and break into the mainstream, it’s Generative AI.

Over a year ago, only those working directly with AI or passionate about the technology would have been aware of its capabilities, but now thanks to applications like ChatGPT, it reaches from businesses to consumers to governments.

Now, research suggests more than half of US employees use generative AI for daily tasks, while an estimated four million in the UK have embraced platforms, like ChatGPT, for work.

However, the widespread accessibility of generative AI has not come without risk.

While this technology has created opportunities for some, it has also found its way into the hands of bad actors who set out to use it for malicious purposes.

We’ve seen AI quickly become infused in fraudulent activity with scams now scaled with unprecedented speed and efficiency.

As this threat has grown, lawmakers and regulators around the world have had to keep pace to curb the danger and keep businesses and consumers safe.

That’s why we’ve seen the emergence of AI Safety Summits, alongside new restrictions and regulations, that target an assortment of AI threats, like deepfakes, to reduce the likelihood of individuals and organisations being targeted.

So, what does the fraud landscape look like for the year ahead and how will AI shape it?

1. The deepfake risk continues

In 2023, we witnessed a surge in AI-manipulated and synthesised media, with deepfake fraud attempts jumping 3000% year-on-year and stories emerging almost daily of scams or impersonations of public figures.

Driven by the availability of simple-to-use AI tools, fraudsters scaled sophisticated attacks without needing technical skills or heavy resources.

This is already impacting high-profile public figures, like politicians, with Keir Starmer and Sadiq Khan falling victim to deepfake attacks, while it has also impacted celebrities, such as Taylor Swift and business leaders too.

With a record-breaking 40-plus countries – representing more than 40% of the world’s population – due to hold elections this year, deepfakes have the potential to inflict significant reputational damage, destabilise trust in public leaders and spread misinformation.

What’s more, as AI continues to attract public interest, an election provides an opportune moment for a fraudster to double down on deepfakes to cause disruption.

While talk around deepfakes has focused predominantly on public figures, and understandably so, it is becoming increasingly apparent that no one is safe.

Just recently, a company employee was tricked into paying £20 million to scammers by an individual who posed as a senior officer at the company.

In 2024, businesses and individuals alike will need to remain vigilant. Onfido’s Identity Fraud Report shows that 80% of attacks on biometric systems, like those used in e-voting, were videos of videos displayed on a screen.

This suggests fraudsters will always opt for the easiest, most cost-effective route.
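The "video of a video" attacks described above can often be flagged because replaying footage on a screen introduces moiré interference that a capture of a live face lacks. As a minimal, illustrative sketch (not Onfido's method; the cutoff and threshold values are assumptions that a real system would tune on labelled data), one crude signal is the share of image energy at high spatial frequencies:

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    Screen-replay ("video of a video") presentation attacks often add
    moiré patterns, which appear as extra high-frequency energy.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised radial distance from the centre of the spectrum.
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    return float(power[r > cutoff].sum() / power.sum())

def looks_like_screen_replay(gray: np.ndarray, threshold: float = 0.05) -> bool:
    # Threshold is purely illustrative; production systems combine many
    # such signals and tune them against real attack data.
    return high_freq_energy_ratio(gray) > threshold
```

A frame with a fine, near-Nyquist grid pattern superimposed will score higher than a smooth natural image, which is the intuition behind this family of presentation-attack checks.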

Should they go mainstream, deepfakes will be a major threat to contend with.

2. Smishing and phishing attacks become more persuasive

As AI becomes ever more accessible, malicious actors are taking advantage to orchestrate large-scale and highly convincing smishing (SMS phishing) and phishing attacks.

The widespread availability of generative AI tools has lowered the barriers to entry, enabling cybercriminals to craft deceptive messages that appear more authentic than ever.

This means the typical signs of a scam we all look for, such as checking spelling and grammar mistakes, will be harder to spot.

Attackers will continue to use generative AI and large language models in the likes of phishing and smishing attempts to make the content and images appear more legitimate.
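Because generative AI removes the spelling and grammar tells, defensive filters increasingly lean on structural signals instead, such as a link whose visible text names one domain while the underlying URL points somewhere else. A minimal sketch of that idea (the keyword list, scoring weights, and function names are my own illustrative assumptions, not any vendor's API):

```python
import re
from urllib.parse import urlparse

# Illustrative urgency cues often abused in phishing subject lines.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice"}

def link_domain_mismatch(display_text: str, href: str) -> bool:
    """True if the visible link text names one domain but the href targets another."""
    shown = re.search(r"([a-z0-9-]+\.[a-z]{2,})", display_text.lower())
    actual = urlparse(href).hostname or ""
    return bool(shown) and not actual.endswith(shown.group(1))

def phishing_score(subject: str, links: list[tuple[str, str]]) -> int:
    """Crude additive score: urgency words count 1, mismatched links count 2."""
    score = sum(word in subject.lower() for word in URGENCY_WORDS)
    score += sum(2 for text, href in links if link_domain_mismatch(text, href))
    return score
```

For example, a subject like "URGENT: verify your account" with a link displayed as "paypal.com" that actually resolves elsewhere scores well above a routine receipt email. Real filters combine dozens of such features, typically inside a trained classifier rather than a hand-weighted sum.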

But hope is not lost – as scams become more advanced, so does the AI used to keep businesses and individuals safe.

To this effect, we are seeing an AI vs AI battleground emerge, and it’s crucial that businesses deploy systems that have been trained on the very latest attack vectors so they can stay on top of the evolving threat landscape in 2024.

3. Social engineering scams are on the rise

Contrary to using AI to create convincing spoofs, in 2024 we’ll likely see the technology used for a much simpler attack – eliciting people’s private information through social engineering scams.

Cybercriminals have shown over the past year that sophisticated technology is not always necessary to carry out successful cyberattacks at scale.

For example, Caesars Entertainment recently confirmed that a social engineering attack had stolen data from members of its customer rewards programme.

In identity verification, we’ve already seen a number of scams employed by fraudsters to get unsuspecting genuine people to help create fake accounts on their behalf.

Fraudsters have used several different scenarios to get people to complete the application process using their authentic documents and matching biometric images.

These range from fake job ads to delivery drivers asking for proof of identity on people’s doorsteps.

The difficulties around detecting these seemingly genuine applications mean they pose a real threat to many of the standard practices in remote identity verification, so businesses need to be ready with the right defences to combat these threats.

4. Rigorous regulation is on the horizon

From the EU AI Act to the Online Fraud Charter, policymakers in 2023 took significant steps to protect the public from fraud.

Notably, the UK introduced a new Anti-Fraud Champion, who is responsible for driving collaboration between the government, law enforcement and the private sector to help block fraud, which accounts for 40% of all crime.

In 2024, we are likely to see tighter regulations come into place as governments try to help businesses keep one step ahead of bad actors.

Failure to comply with these laws and regulations can have serious consequences, including fines, legal action, and damage to reputation.

For instance, from 2024, UK banks will be required to refund customers who have been tricked by scammers in a phenomenon known as authorised push payment (APP) fraud.

With losses to APP fraud reaching almost £500m in 2022, this will no doubt raise significant concerns for banks this year.

New legislation will play a pivotal role in reducing fraud cases, but there is a delicate balance to be maintained.

While it is imperative to address the risks posed by AI, we need to be careful not to demonise the technology completely, as this would detract from its role both in driving innovation and in providing robust safeguards against new and emerging threats.

The fraud landscape is evolving, and fast

In 2024, the convergence of technology, regulation, and criminal ingenuity will shape how we experience fraud as AI proliferates and becomes the dominant means of attack.

Individuals must remain vigilant, but it is businesses that must lead the charge, providing robust protection and taking responsibility for keeping bad actors out of their platforms and services.

A necessary first step is to adopt proactive cybersecurity measures, such as having the right user onboarding checks in place.

But it’s also important to embrace industry knowledge-sharing and to work with governments and regulators to ensure that new AI frameworks do not curb its potential, while ensuring it is rolled out safely and beyond the influence of bad actors.

This year marks a pivotal moment in the ongoing challenge of AI-driven fraud.

There is no more urgent time to act, and businesses need to make sure they are on the right side of the fight.

More Security News

Article: The fraud outlook for 2024: AI risks are on the rise - published 7 months ago.

https://securityjournaluk.com/fraud-outlook-for-2024-ai-risks-on-the-rise/   
Published: 2024 04 12 07:45:00
Received: 2024 04 12 07:46:55
Feed: Security Journal UK
Source: Security Journal UK
Category: Security
Topic: Security
