Abhishek Karnik, Senior Director of Research at McAfee, on staying safe in the age of AI.
2023 marked the acceleration of low-cost, easy-to-use generative AI tools. For all the opportunities these tools unlock, cybercriminals are seeking ways to use them to spread disinformation and run sophisticated scams at scale.
When it comes to AI-generated audio, most of the scams we’ve seen at McAfee involve a cybercriminal impersonating someone familiar to the victim, such as a friend or family member, and claiming that they’ve been in a car accident or are in some kind of trouble and need urgent help… and money.
Concerned by the efficacy of this scam, we started to investigate it a little more. We found that almost a quarter of Brits say that they or a friend have already experienced an AI voice scam like this. Nearly 80% of victims confirmed they had lost money as a result.
With 65% of adults not confident that they could tell a cloned voice from the real thing, it’s no surprise that this technique has gained momentum, especially as we found more than a dozen freely available AI voice-cloning tools on the internet.
As we look ahead to 2024, we’re anticipating a new wave of elevated AI-generated scams, mis- and disinformation, and online threats. Here’s what to look out for.
2024 will see several pivotal elections across the globe, including the United States Presidential Election, the Indian general election, European Union Parliament elections and, potentially, a UK general election.
With generative AI more accessible than ever, this year’s election cycle can be expected to be significantly affected by deepfakes and disinformation. Using just a small sample of a candidate’s voice, bad actors can create realistic voice clones that could be used to damage their reputation and credibility.
A recent McAfee study found that, when asked which potential uses of deepfakes are most concerning, 37% of Brits said influencing elections. In the last year, 35% of people have seen deepfake content, with over a fifth (21%) coming across a video, image, or recording of a political candidate that they thought was real at first.
While many voters will likely raise a sceptical brow at statements politicians make to discredit their opponents, defamatory claims are likely to appear more believable when supported by convincing deepfakes. To avoid being misled, it is important to check the facts using multiple sources, particularly if you are inclined to share the content, as this could spread the disinformation further.
To help people protect themselves from being misled by online manipulation, at McAfee we are developing AI-powered innovations, such as Project Mockingbird, which uses a combination of contextual, behavioural, and categorical detection models to detect and expose AI-generated, maliciously altered audio in videos with 90% accuracy.
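To make the idea of combining multiple detection models a little more concrete, the sketch below shows one simple way detector outputs could be fused. It is purely illustrative and is not McAfee’s Project Mockingbird implementation; the detector names, weights, and threshold are invented for the example, assuming each hypothetical detector returns a probability that an audio clip has been AI-generated.

```python
# Illustrative only: a simple weighted fusion of hypothetical detector scores.
# This is NOT McAfee's Project Mockingbird; names, weights, and threshold are invented.
from dataclasses import dataclass


@dataclass
class DetectorScores:
    contextual: float   # does the audio fit the surrounding visual/textual context?
    behavioural: float  # does the speech cadence match the speaker's known patterns?
    categorical: float  # raw signal-level classifier output


def fuse_scores(scores: DetectorScores,
                weights: tuple = (0.3, 0.3, 0.4),
                threshold: float = 0.7) -> bool:
    """Weighted average of detector outputs; flag the clip if it crosses the threshold."""
    combined = (weights[0] * scores.contextual
                + weights[1] * scores.behavioural
                + weights[2] * scores.categorical)
    return combined >= threshold


# Example: all three hypothetical detectors lean towards "AI-generated".
print(fuse_scores(DetectorScores(contextual=0.8, behavioural=0.65, categorical=0.9)))  # True
```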
Aside from elections, the 2024 Summer Olympics in Paris are on the horizon, and an event with this level of appeal is likely to attract cybercriminals looking to capitalise on fans’ excitement.
Cybercriminals have latched onto popular events for phishing texts and emails for years, but these scams are becoming harder to identify as generative AI removes the traditional hallmarks of misspelled words and poor grammar. Generative AI also allows scammers to create custom phishing websites in different languages to target individuals by locale. Combine that with the excitement surrounding the Olympic Games, and users may be tempted by that email or website promising a chance to win tickets with one click.
Emails, text messages, other messaging channels like WhatsApp and Telegram, and even social platforms are all fair game, so it’s essential to remain vigilant and pause to think before clicking links or handing over personal or banking information.
Scammers exploit emotions, such as the excitement of the Olympics. Sadly, they also tap into fear and grief. A particularly heartless way of doing this is charity fraud. While it takes many forms, it usually involves a criminal setting up a fake charity site or page to trick well-meaning donors into thinking they are supporting a legitimate cause or helping to fight a real issue.
We expect this to continue in 2024 and see potential for it to increase, given the conflicts in Ukraine and the Middle East. Scammers might also heighten the emotional pull of their messaging by tapping into the same AI technology we predict will be used in the 2024 election cycle. Overall, expect their attacks to look and feel far more sophisticated than in years past.
There are two sides to every story, and AI is no different. While AI presents new cybersecurity risks, it also has the potential to transform cybersecurity in a positive direction by improving threat detection, prevention, and response.
AI is constantly learning, which means it can analyse vast amounts of data, far more than human cybersecurity professionals can, to identify patterns, anomalies, and other indicators of threats, and it does all of that in real time. It can also leverage historical data to detect emerging threats such as phishing attempts, insider threats, and malware.
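As a rough illustration of what anomaly-based detection looks like in practice, the sketch below trains scikit-learn’s IsolationForest on made-up “normal” event features and then flags two outliers. The feature names and values are hypothetical stand-ins for the kind of telemetry a real system would analyse; this is not a description of any McAfee product.

```python
# A minimal, illustrative sketch of anomaly-based threat detection.
# The feature names and values below are hypothetical, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-event features: [requests_per_minute, distinct_domains, failed_logins]
normal_events = rng.normal(loc=[20, 5, 0.2], scale=[5, 2, 0.5], size=(500, 3))
suspicious_events = np.array([
    [400.0, 60.0, 0.0],   # burst of requests to many domains (possible data exfiltration)
    [15.0, 3.0, 25.0],    # repeated failed logins (possible credential stuffing)
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

# predict() returns 1 for events that look normal and -1 for anomalies
print(model.predict(suspicious_events))  # likely output: [-1 -1]
```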
McAfee’s AI-powered Scam Protection, for example, proactively blocks dangerous links that appear in text messages, social media, or web browsers, allowing users to engage with text messages, read emails, and browse the web securely and with peace of mind.
I’m intrigued by what else 2024 has in store. What is certain, though, is that as scammers continue to leverage AI, so too will the security professionals and threat researchers trying to stop them.
Abhishek Karnik is the Senior Director of Research at McAfee and leads a global team of experts on cybersecurity threats and intelligence with a focus on providing protection content to McAfee products.