Welcome to our Cyber Security News Aggregator.

Cyber Tzar provides a cyber security risk management platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, much like a credit score.

Add People warns against unregulated AI use

published on 2023-10-24 09:25:55 UTC by Rebecca Knowles
Content:

A new survey by digital marketing company Add People has revealed that 1 in 3 UK workers are using generative AI tools without their boss’ knowledge.

The survey also found that only 14% of businesses have officially implemented AI tools, meaning most usage of generative AI is unregulated even where AI use is known.

Though AI tools are useful and have been proven to increase productivity, there have been numerous reports of safety concerns around data storage. These issues stem from the tools storing conversations in order to train their AI models.

Add People want to raise awareness

This Cybersecurity Awareness Month, the team at Add People want to raise awareness of the importance of using ChatGPT and other AI tools safely.

Add People Chief Marketing Officer Peter Marshall said: “This survey shows that a third of workers who use AI aren’t telling their bosses, meaning the information they’re sharing with these platforms is at risk if a data breach occurs. The survey also found that only 14% of workplaces have officially implemented AI tools, which can contribute to misuse.

“The best way to avoid insecure AI use is to raise awareness of the risks with your staff and make recommendations on when and how to use tools like Bard and ChatGPT. Creating guidelines on what information should not be shared with these tools is a good way of building a foundation for AI implementation in the workplace. A light touch approach like this helps keep AI use safe while also supporting experimentation in the early stages of application.”

Speaking to Add People, cyber security expert Ian Reynolds from SecureTeam also had this advice for safe AI use: “If 1 in 3 people using AI at work are doing so without their boss knowing, this could result in serious security issues further down the line.” He said some steps that business leaders can take include:

  • Raising awareness

    When faced with the exciting prospect of new software, people often overlook safety considerations that are second nature to them when performing other tasks. For this reason, it’s important to refresh your employees’ minds on safety protocols regularly and to raise awareness of known safety issues with new technology like generative AI.

    This could simply take the form of a company-wide email, or of asking managers to discuss the uses and risks of generative AI with their staff. Regular refresher training on cybersecurity is one of the best ways to keep your company safe, so if you know your employees are starting to use AI, now might be a good time to organise training.

  • Ask your team

    While a third of people are using AI without their boss knowing, this may partly be down to staff not knowing they needed to disclose their usage. By asking your team whether they have ever used these tools at work and how they used them, you can understand the extent of the risk to your company’s data. This will also help inform your approach going forward and how to strike the balance between security and innovation.

  • Running AI-specific training

    If your staff are beginning to use AI tools at work, you may want to run an AI-specific training session. This is an opportunity to explore how people in your industry are using AI tools, how people in your organisation have begun to use them, and what the risks are.

    With training around the specific tools, you’ll ensure that everyone understands the responsibility they hold to safeguard the data of the organisation.

  • Creating frameworks

    If your industry is engaging with AI but your organisation has yet to officially implement any tools or strategies, now might be the time to establish some basic frameworks for your staff to follow. You could do this by adopting frameworks from other leaders in your industry, or by identifying champions in the business who can produce frameworks that follow security guidelines and promote them to other members of staff.

  • Restricting access

    If you work in a particularly sensitive industry and are concerned about the risk of data leaks through these tools, you may want to consider restricting access on your networks and company devices. While this limits the productivity value AI tools can bring, it may be the only way to insure yourself against data leaks whose risk may outweigh the benefits.”
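The “restricting access” step above is typically enforced at the network layer, for example by a web proxy or DNS filter that refuses requests to known generative-AI services. A minimal sketch of that deny-list check is below; the domain list and function name are illustrative assumptions, not configuration from the article or any specific product.

```python
# Hypothetical deny-list of generative-AI service domains, of the kind a
# proxy or DNS filter might consult before allowing an outbound request.
BLOCKED_DOMAINS = {"chat.openai.com", "bard.google.com"}

def is_blocked(host: str) -> bool:
    """Return True if the host, or any parent domain of it, is on the deny-list."""
    host = host.lower().rstrip(".")
    parts = host.split(".")
    # Check the host itself and each parent domain (a.b.c -> b.c -> c),
    # so subdomains of a listed domain are also blocked.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

print(is_blocked("chat.openai.com"))     # True  (listed domain)
print(is_blocked("eu.chat.openai.com"))  # True  (subdomain of a listed domain)
print(is_blocked("docs.example.com"))    # False (not listed)
```

In practice such a list would live in the proxy or firewall configuration rather than in application code, but the matching logic (exact host plus parent-domain suffixes) is the same idea.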

More UK Security News.

Article: Add People warns against unregulated AI use

https://securityjournaluk.com/add-people-warns-against-unregulated-ai-use/
Published: 2023-10-24 09:25:55
Received: 2023-10-24 09:28:33
Feed: Security Journal UK
Source: Security Journal UK
Category: Security
Topic: Security
