
CETaS: prepare for harms posed by generative AI

Published on 2023-12-18 14:28:03 UTC by Rebecca Knowles

The Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute has urged the UK national security community to prepare for harms posed by generative AI.

Generative AI use could cause significant harm to the UK’s national security in unintended ways, according to a new CETaS report.

CETaS highlights key areas of concern

The report, titled ‘Generative AI and National Security: Risk Accelerator or Innovation Enabler?’, highlights key areas of concern that need to be addressed to protect the nation from threats posed by these powerful technologies.

To date, conversations about threats have focused primarily on understanding the risks from groups or individuals who set out to inflict harm using generative AI. These threats are varied and include disinformation campaigns, cyberattacks, weapon instruction, radicalisation and the generation of child sexual abuse material.

Generative AI will amplify the speed and scale of these activities, leaving a larger proportion of the population vulnerable to harm than ever before.

But while it is crucial to manage these threats, the report also urges policymakers to plan for the unintentional risks posed by improper use of and experimentation with generative AI tools, and for excessive risk-taking driven by over-trust in AI outputs and a fear of missing out on the latest technological advances.

These risks could arise through:

  • Adoption of AI in critical national infrastructure or its supply chains
  • Use of AI in public services responsible for areas like health, policing, education and welfare
  • Private sector/DIY experimentation, where the fear of missing out on AI advances could cloud judgements about higher-risk use cases

Building on the momentum from the recent AI Safety Summit, the report makes policy recommendations for the new AI Safety Institute and other government departments and agencies that could help address both malicious and unintentional risks.

These include recommendations on evaluating AI systems and on the use of generative AI for intelligence analysis.

The report highlights that autonomous AI agents will accelerate both opportunities and risks in the security environment and offers recommendations to ensure their safe and responsible use.

Recommendations for specific risk categories

The report also makes recommendations for specific risk categories, including political disinformation, radicalisation and terrorism, voice cloning, biochemical weapons and child sexual abuse material.

Ardi Janjeva, Research Associate from CETaS at The Alan Turing Institute, said: “Generative AI could offer opportunities for the national security community, but it is currently too unreliable and susceptible to errors to be trusted in the highest-stakes contexts. Policymakers must change the way they think and operate to make sure that they are prepared for the full range of unintended harms that could arise from improper use of generative AI, as well as malicious uses.”

The research team consulted more than 50 experts across government, academia, civil society and leading private sector companies; most judged that unintended harms are not receiving adequate attention compared with the adversarial threats national security agencies are accustomed to facing.

Political disinformation and electoral interference

The report analyses political disinformation and electoral interference and raises particular concerns about the cumulative effect of different types of generative AI technology working in combination to spread misinformation at scale. There are fears that combining generative text, image, video and audio will exceed the impact that any one of those formats can have individually, and that debunking a false AI-generated narrative in the hours or days preceding an election would be particularly challenging.

For example, an AI-generated video of a politician delivering a speech at a venue they never attended may be seen as more plausible if presented with an accompanying selection of audio and imagery, such as the politician taking questions from reporters and text-based journalistic articles covering the content of the supposed speech.

Professor Mark Girolami, Chief Scientist at The Alan Turing Institute, said: “Generative AI is developing and improving rapidly and while we are excited about the many benefits associated with the technology, we must exercise sensible caution about the risks it could pose, particularly where national security is concerned.

“With elections in the US and the UK on the horizon, it is vital that every effort is made to ensure this technology is not misused, whether intentionally or not.”


Article: CETaS: prepare for harms posed by generative AI
Source: Security Journal UK
URL: https://securityjournaluk.com/cetas-prepare-for-harms-posed-by-generative-ai/
Published: 2023-12-18 14:28:03
