Welcome to our Cyber Security News Aggregator.

Cyber Tzar provides a cyber security risk management platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, just like a credit score.

What role could AI have in public votes this year?

published on 2024-02-29 14:52:05 UTC by philviles
Content:

AI technology could play a pivotal role in public votes this year, both negatively and positively. But how will we know what’s real and what’s fake?



This year is set to be a milestone for the power of the public vote, with the electorates of the UK and the USA, amongst other nations, set to choose their next leaders through a UK General Election and the US Presidential election.


2016 was a pivotal year: Russia attempted to undermine the US election using malicious social media campaigns to influence voting. It is this interference that sparks both concern and, more positively, mitigation efforts to prevent such actions, particularly with the rise of deepfake technology.


A plain-looking office building in St Petersburg was the source of the 2016 US election interference, where Russians were paid to create thousands of social media posts hoping to influence the votes of the American people.


They would organise protests and push racist agendas in a bid to divide the nation and cause chaos prior to the polls opening. The “Internet Research Agency” spread their disinformation - false information intended to deceive - that was, in turn, unwittingly shared by the public as misinformation - false information NOT intended to deceive.


In February, police in Hong Kong reported how a finance employee at a multinational corporation fell victim to a scam involving deepfake technology during a video conference call, resulting in a fraudulent payout of $25 million after the perpetrator impersonated the company's CFO.


The staff member joined a video call that displayed what they thought were several senior members of the organisation, but which were in fact computer-generated deepfakes. This incident emphasises how the technology can be used by threat actors and fraudsters for financial gain, but the same tactics can also be used to influence the public's political views.



Iranian state-backed threat actors recently hijacked a TV streaming service across the United Arab Emirates to push the topic of the Israel/Hamas conflict by way of an AI-generated deepfake "news presenter" displaying unverified images of Palestinians being attacked by Israelis.


The UK also felt the potential of deepfake technology in September 2023, during the Labour Party conference, when a social media post surfaced featuring an audio clip allegedly depicting Labour leader Sir Keir Starmer verbally berating aides. Although swiftly debunked as a fabrication, the clip garnered 1.5 million views.


Additionally, in November, a fake audio recording of London Mayor Sadiq Khan purportedly advocating for the rescheduling of Armistice Day due to a pro-Palestinian demonstration gained significant traction across social media platforms.


Anthropic, the AI company in which Amazon are investing $4bn, has revealed new technology focused on countering election misinformation leading up to the 2024 US presidential election, while Google is gearing up an initiative to aid voters in identifying political misinformation as it arises.


Anthropic are the engineers behind Claude 2, a popular large language model used by DuckDuckGo and Quora. An LLM is a type of AI that’s been designed to understand, generate, and sometimes translate human language prompts. The new technology, called Prompt Shield, displays a pop-up whenever a user is searching for voting information and will offer to redirect the user to a reliable website to obtain verified, up-to-date voting data.
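
To picture how a prompt-shield style intercept could work, the short Python sketch below detects a voting-related query and offers a redirect to a verified source. This is purely illustrative and based on broad assumptions: the keyword list, the function names and the redirect URL are hypothetical placeholders, not Anthropic's actual Prompt Shield implementation.

# Illustrative sketch: a naive keyword-based "prompt shield" that intercepts
# voting-related queries and offers a redirect to an authoritative source.
# The keyword list, function names and URL are assumptions for illustration
# only; they are not drawn from Anthropic's actual Prompt Shield.

VOTING_KEYWORDS = {
    "vote", "voting", "ballot", "polling station", "register to vote",
    "election date", "where do i vote", "how do i vote",
}

# Hypothetical destination; in practice this would point to an official,
# verified election-information service.
OFFICIAL_VOTING_INFO_URL = "https://www.example.gov/voting-information"


def is_voting_query(prompt: str) -> bool:
    """Return True if the prompt appears to ask for voting information."""
    text = prompt.lower()
    return any(keyword in text for keyword in VOTING_KEYWORDS)


def shield_prompt(prompt: str) -> str | None:
    """If the prompt is voting-related, return a pop-up style notice offering
    verified voting data; otherwise return None so the model answers normally."""
    if is_voting_query(prompt):
        return ("It looks like you're asking about voting. For verified, "
                f"up-to-date information, please visit {OFFICIAL_VOTING_INFO_URL}.")
    return None


if __name__ == "__main__":
    print(shield_prompt("Where do I vote in the general election?"))  # notice shown
    print(shield_prompt("Explain how large language models work."))   # None, no shield

In practice such a system would more likely rely on classifier or model-based detection rather than simple keyword matching, but the basic flow is the same: detect a voting-related query, interrupt with a notice, and point the user to a verified source.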


Google informed Reuters last week of its intention to initiate an anti-misinformation campaign across five EU countries prior to the European parliamentary elections in June. The campaign will involve deploying ads on social media platforms employing "pre-bunking" strategies, aimed at protecting individuals against false narratives so that they can recognise misinformation more effectively. These adverts will be pushed out across Belgium, France, Germany, Italy and Poland.


With the increasing use of both disinformation and deepfake technology to influence real-world events, the fact that organisations are developing mitigations against such tactics brings hope that their malicious impact can, at least, be reduced.


The democratic foundations of countries with genuine and ethical voting systems depend on the public being able to trust the integrity of campaigns and election mechanisms.


To see further guidance on the threat posed by deepfakes, watch the below video by James Cleverly from his time as foreign secretary.




Reporting

Report all Fraud and Cybercrime to Action Fraud by calling 0300 123 2040 or online. Forward suspicious emails to report@phishing.gov.uk. Report SMS scams by forwarding the original message to 7726 (spells SPAM on the keypad).



Article: What role could AI have in public votes this year? - published 9 months ago.

https://www.emcrc.co.uk/post/what-role-could-ai-have-in-public-votes-this-year   
Published: 2024 02 29 14:52:05
Received: 2024 04 02 11:26:37
Feed: The Cyber Resilience Centre for the East Midlands
Source: National Cyber Resilience Centre Group
Category: News
Topic: Cyber Security
Views: 1
