Generative AI: Workplace Innovation or Security Nightmare?

Category: Insights
By Frederick Coulton

The field of AI has been around for decades, but its current surge is rewriting the rules at an accelerated rate. Fuelled by increased computational power and data availability, this AI boom brings with it both opportunities and challenges.

AI tools fuel innovation and growth by enabling businesses to analyse data, improve customer experiences, automate processes, and innovate products – at speed. However, as AI becomes increasingly commonplace, concerns about misinformation and misuse arise. With businesses relying more on AI, the risk of unintentional data leaks by employees also goes up.

For many though, the benefits outweigh any risks. So, how can we empower employees to harness the power of AI without risking data security?


Ready or not, AI is here to stay

Unless you’ve been living under a rock, you’ll know that generative AI (GenAI) tools have been hugely popular in recent years: groundbreaking technology capable of producing content that appears strikingly human-made.

New generative AI tools are transforming how we work and create. They're revolutionising natural language generation, content creation, personalised recommendations, and innovative problem-solving. These models promise to reshape our interaction with technology, unlocking new avenues for efficiency, creativity, and user engagement.

This wave of innovation is reshaping industries, cementing its status as a valuable asset for businesses and individuals alike. Given the rapid pace of technological advancements, we anticipate many more compelling use cases and applications for GenAI on the horizon.

Yet, not without risks

While we may marvel at the advancements of GenAI, it's crucial to balance this excitement with an awareness of the associated risks, particularly in the areas of data privacy and technology misuse.

Many organisations are finding that the number of employees accessing AI apps is growing exponentially. According to a study by Netskope Threat Labs, during May and June 2023, the percentage of enterprise users using at least one AI app daily increased by 2.4% each week.

Additionally, a recent Deloitte study revealed that 61% of employees are currently using or planning to use generative AI. Of those using it, 26% have not informed their managers, and 24% use it despite company bans.

The growing adoption of GenAI raises the risk of unintended data exposure. Security teams often have limited visibility into the data shared on these platforms, making it harder for businesses to strike a balance between innovation and minimising security risks.

Data privacy and leakage concerns

One of the most pressing issues associated with GenAI is the risk of unauthorised data access and leakage. This arises due to two main factors. First, AI models need vast amounts of data to learn and generate content, which may include sensitive personal information protected by privacy laws, as well as copyrighted material used without permission.

Second, the various stages of AI training and deployment open multiple vectors for potential leaks or breaches, with increasingly sophisticated cyber attacks explicitly targeting these AI systems.

For instance, a chatbot like ChatGPT requires users to provide relevant prompts to generate responses. During this interaction, employees might accidentally or intentionally share sensitive data. Once submitted, this data could be used to train AI models. And because information is transmitted to and stored on external servers, it cannot be recalled once submitted.

Employees may upload sensitive data like personally identifiable information (PII), intellectual property (IP), or financial data. This could lead to external exposure and leakage, damaging the company’s reputation. One widely reported example came last year, when Samsung workers unwittingly leaked confidential data whilst using ChatGPT to help them fix problems with their source code.
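To make the risk concrete, here is a minimal sketch of the kind of check a security team might run on prompts before they leave the organisation. The pattern names and regexes are illustrative assumptions, not a real DLP product; production tooling would use far more robust detection than simple regular expressions.

```python
import re

# Hypothetical patterns for a few common PII types; real DLP tools
# combine many more detectors with validation and context checks.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_pii(prompt: str) -> list[str]:
    """Return the names of PII categories detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarise this: contact Jane at jane.doe@example.com"
found = flag_pii(prompt)
if found:
    print(f"Warning: prompt appears to contain {', '.join(found)}")
```

A check like this could sit in a browser extension or network proxy, warning the employee in real time rather than silently blocking them, which supports the coaching-over-banning approach discussed later in this article.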

Misuse of technology

The very attributes that make GenAI a powerhouse—like the generation of credible and sophisticated content—also make it vulnerable to misuse.

This technology can produce misleading and hard-to-detect media, such as deepfakes, that can be used maliciously. Its capabilities can be weaponised to deceive, defame, or defraud individuals and organisations, enhancing impersonation and fraud attempts like phishing emails and fake news.

Ethical considerations must form the core of GenAI deployment strategies. There is an imperative for organisations to develop guidelines and policies that govern the responsible use of AI.

Inaccurate or dangerous responses and hallucinations

While most people are aware of GenAI producing inaccurate images, like giving people the wrong number of fingers, recent examples are emerging of GenAI responses that are inaccurate or downright dangerous.

For example, in May 2024, Google’s AI Overviews briefly suggested, in response to a query about cheese not sticking to pizza, mixing non-toxic glue into the sauce. Meanwhile, a study from Purdue University in December suggested that 52% of GenAI answers to coding questions were simply incorrect.

Going forward: Gain real-time visibility to promote secure AI use

Without visibility of how employees are using AI tools, organisations can't provide the real-time coaching necessary for safe and effective use. Monitoring for the oversharing of sensitive data is crucial. Knowing when and by whom a risk occurs allows for effective mitigation and management.

To protect data privacy and curtail misuse, a determined effort that includes stringent security protocols, ethical guidelines, and continuous education is essential. Only with a comprehensive approach can we ensure that GenAI continues to be an asset rather than a liability.

Organisations should empower employees to responsibly utilise applications like ChatGPT. These tools serve specific business needs, so instead of banning them or reprimanding users, promote secure use and educate employees about potential risks. With advanced technology and strong privacy policies, we can maximise AI's potential while maintaining user trust.
