OpenAI has published a report on threat actors with links to hostile nation states using its tools to conduct influence operations, including how this activity has been disrupted.
This follows a wider trend of nation state threat actors using AI tools to advance their cyber capabilities, including generating text and images for use in articles and social media posts.
In the report, OpenAI states that it has disrupted at least five influence operations, attributed to a range of hostile nation states including Russia and China.
One case study in the report, dubbed ‘Doppelganger’, investigated an operation by a persistent Russian threat actor posting anti-Ukrainian content across the internet. The operation used clusters of accounts built around OpenAI’s tooling, with each cluster run by a different functional team and displaying its own tactics, techniques and procedures (TTPs).
The operation targeted audiences in Europe and North America, generating content for websites and social media. Once content was published on a site, up to five accounts would interact with it, often by commenting on the posts.
An investigation into these accounts revealed that they only ever interacted with the fake content, likely in an attempt to increase the posts’ visibility.
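To illustrate why this behaviour stands out, the following is a minimal, hypothetical Python sketch (not taken from the OpenAI report) of a heuristic that flags accounts whose interactions are confined to a small, shared set of posts. The function names, thresholds and sample data are illustrative assumptions only.

# Hypothetical sketch: flag accounts whose engagement is confined to a small,
# shared set of posts, a pattern consistent with coordinated fake amplification.
# All names, thresholds and data here are illustrative assumptions, not taken
# from the OpenAI report.
from collections import defaultdict

def flag_suspicious_accounts(interactions, max_distinct_posts=5, min_shared_accounts=3):
    # interactions: list of (account_id, post_id) pairs, e.g. from platform logs
    posts_by_account = defaultdict(set)
    accounts_by_post = defaultdict(set)
    for account, post in interactions:
        posts_by_account[account].add(post)
        accounts_by_post[post].add(account)

    flagged = set()
    for account, posts in posts_by_account.items():
        # Accounts that only ever touch a handful of posts...
        if len(posts) > max_distinct_posts:
            continue
        # ...each of which is also boosted by the same small group of accounts.
        if all(len(accounts_by_post[p]) >= min_shared_accounts for p in posts):
            flagged.add(account)
    return flagged

if __name__ == "__main__":
    sample = [
        ("acct1", "postA"), ("acct2", "postA"), ("acct3", "postA"),
        ("acct1", "postB"), ("acct2", "postB"), ("acct3", "postB"),
        ("organic_user", "postA"), ("organic_user", "postC"),
        ("organic_user", "postD"), ("organic_user", "postE"),
        ("organic_user", "postF"), ("organic_user", "postG"),
    ]
    # Prints ['acct1', 'acct2', 'acct3']; the organic user is not flagged.
    print(sorted(flag_suspicious_accounts(sample)))

In practice, platforms and security teams combine many signals of this kind; this single heuristic is only meant to show how narrowly focused engagement can be surfaced.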
An impact assessment determined that the content generated no substantial positive engagement from authentic audiences on these social media sites.
It does, however, highlight the dedication and capability that hostile nation states apply when attempting to influence audiences.
While this activity related only to the current war in Ukraine, the same approach could be applied to domestic issues in the United Kingdom, attempting to create adverse opinions on certain subjects, which could include policing.
As AI tooling and capabilities continue to develop, threat actors are likely to increase their use of them, not only for content generation but also for developing and reviewing malicious code. This will make detection more complex, both for end users and for security teams.
Reporting
Report all Fraud and Cybercrime to Action Fraud by calling 0300 123 2040 or online. Forward suspicious emails to report@phishing.gov.uk. Report SMS scams by forwarding the original message to 7726 (spells SPAM on the keypad).