Since its debut in November 2022, the artificial intelligence (AI) chatbot ChatGPT has surged in popularity, but it has also been observed generating malicious code.
The chatbot rapidly gained more than one million users and has been used to assist with tasks such as composing emails, essays and code. It even wrote a song in the style of Nick Cave, which the singer dismissed as nonsense!
However, in the short time it has been available, it has reportedly been used to assist attempts to generate malicious code.
The tool was released by the artificial intelligence research laboratory OpenAI and has since generated widespread interest and discussion about how AI is developing and how it could be used in the future.
When ChatGPT was released last November, the cybersecurity company Check Point published early research demonstrating how the AI tool could hypothetically support a full attack chain, from crafting a spear-phishing email to running a reverse shell.
In recent weeks, researchers have observed at least three instances of the tool being leveraged by threat actors to aid attacks, including scripting malware, creating data-encryption tools, and writing code to set up new dark web marketplaces.
Each of these instances provides threat actors and groups with a viable starting point from which to expedite their attacks.
The primary short-term concern over ChatGPT being leveraged for malicious purposes is that it will almost certainly allow low-skilled threat actors to bridge their skills gap and carry out activity that would otherwise be beyond their abilities.
Such tools provide an on-demand means of creating templates of code relevant to the threat actors’ objectives.
However, looking at longer-term effects, there is a realistic possibility that more sophisticated threat actors could adopt ChatGPT to improve the efficiency of their activity and address gaps in their existing capabilities.
Concerns over threat actors abusing ChatGPT have grown since its release, with many security researchers perceiving the chatbot as significantly lowering the skill and entry requirements for writing malware and other malicious tooling.
There is a realistic possibility that easy access to AI tools such as ChatGPT will shift the cyber threat landscape, giving low-skilled threat actors a swifter introduction to writing code and enabling more sophisticated actors to speed up their activities.
Reporting
Report all Fraud and Cybercrime to Action Fraud by calling 0300 123 2040 or online. Forward suspicious emails to report@phishing.gov.uk. Report SMS scams by forwarding the original message to 7726 (spells SPAM on the keypad).