
AI in the Workplace: Beware of Data Leaks

04 Feb '26

Author(s): Michelle Westhoeve

We have previously written about the use of AI in the workplace and AI literacy. Recently, things went wrong: the municipality of Eindhoven reported a data leak because employees uploaded personal data into public AI tools. How can employers prevent such incidents?

The Risks of AI Without Policy

The situation in Eindhoven highlights the dangers of uncontrolled AI use by employees. Personal data was unintentionally shared via public AI tools – not out of malice, but out of ignorance and convenience. As a result, data about citizens and colleagues left the organization, with unclear scope and impact, and the incident may well qualify as a data breach.

This is not merely a theoretical risk: text entered into public AI chatbots such as ChatGPT (for example company documents, code, or personal data) may be stored and used to improve the model. If a similar question is asked later, fragments of the original input could surface in an answer to another user.

Since AI tools are as accessible as email or search engines, the risk lies not in the technology itself, but in everyday usage. Many employees do not realize that data entered into public AI tools is processed on external servers and may be stored and used for model development. Whoever enters personal data, sensitive business information, or confidential documents into such public tools loses control over that data and potentially violates the GDPR.

What Can Employers Do?

Many employers already have policies on confidentiality and handling personal data. It’s wise to review these policies: is it clear to employees that they must not upload information to public AI tools, and why?

As with many other workplace policies, the key is to be clear and to apply the rules consistently. As an employer, include the following in your AI policy:

  • Specify which public AI tools may and may not be used;
  • Clarify how public AI tools can be used (e.g., for translating general texts without sensitive data, but not for optimizing customer files);
  • List available internal alternatives and how they should be used;
  • Describe the procedure if an employee is unsure about using AI (e.g., consult a manager or IT staff first);
  • State possible consequences for policy violations (such as a warning, additional training, or in severe cases, dismissal).

Enforcement: Be Consistent

Employers have the right to enforce consequences for breaches of AI policy. This is not to punish every minor mistake, but to manage risks, similar to other compliance and safety rules. Repeated or deliberate mishandling of sensitive data may justify disciplinary measures.

At the same time, employees must understand the policy. Provide training and raise awareness about AI to ensure safe usage. Employees should also feel safe to ask questions and discuss uncertainties. If people fear sanctions for every mistake, they may hide errors instead of reporting them. Involving staff in safe AI use helps ensure the policy is accepted and followed.

Implementing AI Policy for Employees

In short: employers should review and tighten their guidelines on confidentiality, personal data, and IT use as needed. Involve staff or the works council in the process. This way, you can safely implement AI within the organization without risking the trust or data of customers, citizens, or employees.

Need help drafting policies? Or want to learn more about AI in the workplace and what you can do as an employer, employee, or works council member? Feel free to contact our Employment Law team or one of our AI specialists. Or sign up for our newsletter to stay updated on the latest developments!

Contact

Michelle Westhoeve

Attorney at law

Expertise: Employment law, Privacy law, Employee participation, Transport and Logistics, Distressed companies
