
Artificial Intelligence has been around for decades, but it has surged in popularity since the launch of generative AI technologies like ChatGPT, leading to widespread adoption in organizations and in users' everyday lives. These transformative AI tools are now commonly used for tasks such as report writing, research, data analysis, and coding. Their speed, efficiency, and cost-effectiveness have made them popular with executives and employees alike.

However, this rapid adoption comes with significant security risks. A survey by Cybsafe revealed that 64% of US office workers have entered work information into AI tools, and another 28% are not sure whether they have. In other words, roughly 92% of workers may be sharing confidential information with AI systems, raising concerns about data leakage, breaches, and unauthorized access.

Data leakage, also known as information leakage, occurs when sensitive data accidentally leaves an organization's secure environment. In the context of AI, it occurs when sensitive information is exposed during interactions with AI systems. This can happen through various channels, including misconfigured servers, insecure data storage, and the exchange of information between AI models and users.

A real-world example occurred in May 2023, when Samsung discovered that employees had accidentally leaked sensitive internal source code and an entire meeting transcript by uploading them to ChatGPT. Such incidents can lead to severe consequences:

  • Exposure of sensitive data such as Personally Identifiable Information (PII), putting employee and customer privacy at risk.
  • Increased risk of identity theft and social engineering attacks.
  • Compromise of company secrets and intellectual property.
  • Reputational damage to both the company and the AI service provider.
  • Potential penalties from regulatory bodies, along with other financial and legal consequences.

Companies can take the following steps to protect their data and mitigate these risks:

  • Develop clear policies: Executives should create strong policies and guidelines for using AI tools in the workplace. Where necessary, they can limit or prohibit their use for sensitive tasks.
  • Educate employees: Companies should organize security training on the safe use of AI tools and raise awareness of the potential security risks associated with them. This equips employees to maximize the benefits of generative AI without creating security challenges.
  • Leverage AI for security: Explore AI-driven solutions to prevent data leaks and automate security measures, such as filters that screen prompts before they reach an external AI service (a minimal sketch follows this list). Where feasible, companies can develop their own in-house solutions.
  • Enhance data protection: Prioritize data security by implementing strong encryption and access controls to safeguard data during AI interactions.
  • Conduct regular audits: Regular audits help security teams identify vulnerabilities and address them before they lead to data leaks.
  • Collaborate with AI providers: Companies can work with AI providers to develop features and policies that enhance data security and privacy.
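
As an illustration of the kind of automated safeguard mentioned above, here is a minimal sketch, in Python, of a pre-submission filter that redacts common PII patterns before a prompt is sent to an external AI tool. The patterns and the redact function are illustrative assumptions, not a production-ready control; a real deployment would rely on a vetted data loss prevention product or service.

```python
import re

# Illustrative patterns for common PII; hand-rolled regexes like these are
# only a sketch, not a substitute for a proper DLP solution.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders and report which types were found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

if __name__ == "__main__":
    prompt = "Summarize this: contact Jane at jane.doe@example.com or 555-123-4567."
    safe_prompt, found = redact(prompt)
    if found:
        print(f"Redacted PII types: {', '.join(found)}")
    print(safe_prompt)
```

A filter like this could sit between employees and an approved AI tool, either redacting sensitive strings automatically or blocking the request and alerting the security team for review.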

As individual users, we must be cautious about the type and amount of sensitive information we share with AI systems. It is wise to know which services store user data and to disable such features manually where possible. The adoption of generative AI in the workplace is set to increase, making it important to have proper guardrails in place to protect against cyber risk and improve security posture.
