
AI in the Workplace: What HR and Management Need to Know About the AI Act

16 Dec '25

Author(s): Michelle Westhoeve

The use of AI in HR processes is no longer a future scenario but a current challenge. Employers would therefore be well advised to prepare for the European AI Act. The AI Act's obligations apply in stages; the rules on high-risk systems, for example, will apply from 2 August 2026. What do employers need to know, and how can they prepare?

When is AI considered “high risk”?

The AI Act classifies systems according to risk. Depending on the risk category, more or less extensive obligations apply to the developers ('providers') and users ('deployers') of these systems. Employers will generally be deployers.

In an HR context, AI qualifies as high risk as soon as it affects the career prospects or employment conditions of applicants or employees. Examples include automated applicant screening, algorithms that determine bonuses or promotions, automated workforce scheduling, or AI-driven disciplinary decisions (dismissal, warnings, performance improvement plans).

Many AI systems used in HR will therefore qualify as high-risk systems. Employers using such systems should prepare for the new obligations that will apply from 2 August 2026.

Please note: some AI systems in HR may even be prohibited. This includes AI systems that recognize emotions in the workplace, or systems designed to infer sensitive information about employees (such as race, sexual orientation, or political beliefs) from biometric data.

What the AI Act will require from employers using high-risk systems

Follow the instructions

This may seem self-evident, but an AI tool must be used in accordance with the developer’s instructions and user manual. Employers must ensure that their staff are properly trained and capable of following these instructions.

Human oversight is mandatory

AI may never be the sole decision-maker. A human must be able to intervene, correct, or explain decisions. For example, if an algorithm rejects a candidate, the recruiter must be able to explain why and must be able to adjust or override that decision.

Risk management: demonstrate that risks have been assessed

Employers who use AI for HR purposes and have control over the data used to train or test the AI tool must be able to demonstrate:

  • That the datasets used are representative and relevant for the intended purpose;
  • That active efforts have been made to identify bias;
  • That risks have been identified and mitigated.

Example: an AI tool that selects and ranks candidates may be trained on an apparently neutral and representative dataset, such as the employer’s own workforce data. In practice, however, this data may result in certain individuals being ranked lower, for example because candidates with part-time work or career breaks receive lower scores. If this group consists mainly of women, this leads to (unintended) discrimination.

An employer must have identified such bias in advance and taken measures to remove it from the system. This is not a one-off exercise, but a continuous obligation.
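To make the bias check above concrete: one simple, illustrative way to look for the score gap described in the example is to compare average rankings across groups. The sketch below is a minimal assumption-laden illustration, not a method prescribed by the AI Act; the function names, data, and the 0.8 threshold (a common rule of thumb from employment-statistics practice) are all hypothetical.

```python
# Illustrative sketch only: a minimal check comparing hypothetical AI ranking
# scores between two groups of candidates. All names, data, and thresholds
# are assumptions for illustration, not requirements of the AI Act.

def mean(scores):
    return sum(scores) / len(scores)

def disparity_ratio(scores_a, scores_b):
    """Ratio of the lower group mean to the higher group mean.
    A value well below 1.0 signals a score gap worth investigating."""
    a, b = mean(scores_a), mean(scores_b)
    return min(a, b) / max(a, b)

# Hypothetical scores produced by an AI screening tool
full_time_scores = [0.82, 0.78, 0.85, 0.80]
career_break_scores = [0.61, 0.58, 0.65, 0.60]

ratio = disparity_ratio(full_time_scores, career_break_scores)
print(f"disparity ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold only
    print("potential bias: investigate features correlated with career breaks")
```

A check like this is only a starting point: it flags a disparity but does not explain it, so the continuous obligation described above still requires examining which features drive the gap and correcting them.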

Transparency: employees and applicants must be informed

Under the AI Act, affected individuals have the right to:

  • Know which AI system is being used and for what purpose;
  • Receive an explanation of how the system influences decisions;
  • Access a complaints procedure if they disagree with the outcome.

Please note: data protection and equal treatment laws may also apply. Individuals may be able to request information or file a complaint under the GDPR or equal treatment legislation as well.

Logging: decisions must be traceable after the fact

Employers must be able to demonstrate how a decision was reached. This requires logging the input used by the AI tool, the process applied, and the resulting output. If a human subsequently reviewed the decision, this must also be recorded. In other words, the use of AI must not be a “black box,” but fully transparent.

The role of the works council

Employee participation also plays a role. If an employer intends to introduce a high-risk system in the workplace, employees and their representatives (such as a works council, staff representation body, or trade union) must be informed in advance. In many cases, the works council will also have a right of consent for the introduction of such systems.

Practical steps: how to take action as an organization

  1. Mapping: identify which AI systems are already in use within the organization and which qualify as high risk.
  2. Documenting: record how decisions are made, how risks are analyzed, and how oversight is organized.
  3. Involving the works council: ensure timely involvement when AI is to be used for HR-related purposes.
  4. Training: HR professionals and managers must know how to review and correct AI-driven decisions. Ensure they are AI-literate and receive appropriate training.

Would you like to learn more about AI in the workplace and what this means for you as an employer, employee, or works council member? Feel free to contact our Employment Law team or one of our AI specialists. You can also subscribe to our newsletter to stay informed about the latest developments.

Contact

Attorney at law

Michelle Westhoeve

Expertises: Employment law, Privacy law, Employee participation, Transport and Logistics, Distressed companies

Attorney at law

Bo Leeuwestein

Expertises: Employment law, Employee participation, Healthcare
