
New rules of the AI Act now applicable

08 Aug '25

Author(s): Matthijs Gardien, Bine Schoenmaker and Leonoor Dröge

On 1 August 2024, the AI Act officially entered into force. Since then, the various parts of the AI Act have gradually become applicable. The AI Act distinguishes between three risk groups: AI systems posing an unacceptable risk (prohibited AI systems), high-risk AI systems, and other AI systems with a lower risk that are only subject to transparency obligations. Since 2 February 2025, certain AI systems have been completely prohibited, including AI systems that result in unequal treatment. Now, a new part of the AI Act has become applicable. By August 2027, the AI Act will be fully applicable.

New obligations for providers of AI systems

On 2 August 2025, a number of new articles of the AI Act became applicable. These apply to providers of general-purpose AI models, such as generative AI models like ChatGPT. They mainly concern rules on technical documentation and the transparency of AI models. Providers must be transparent about how their AI models are trained on copyright-protected works, and they must design their AI models in such a way that they cannot distribute illegal content. Providers of AI models with systemic risks are subject to even more rules, including additional documentation and transparency requirements. A systemic risk could include, for example, negative effects on public safety resulting from the use of the AI model. Providers of AI models with systemic risks must also ensure an appropriate level of cybersecurity for the AI model and its physical infrastructure.

New rules on the supervision of AI systems

Rules on the supervision of AI systems have also started to apply. Alongside the national authorities, two new EU-level bodies have been established to supervise the AI Act. The first is the AI Office, which was established on 16 June 2024. This body is part of the European Commission and ensures that the AI Act is applied uniformly across the EU. The AI Office also monitors general-purpose AI models and supervises the emergence of systemic risks, technological and market developments, and the AI ecosystem as a whole. The second body is the European Artificial Intelligence Board (the “Board”), which was established on 1 August 2024, when the AI Act entered into force. The Board works closely with the AI Office but consists of one representative from each Member State; its task is to promote consistency and coordination between Member States. Both bodies already existed, but since 2 August 2025 the AI Office performs specific tasks under the new rules on general-purpose AI models. For example, an incident involving a general-purpose AI model with systemic risks must be reported to the AI Office without delay and, where appropriate, to the national authorities.

New rules on fines

Finally, on 2 August 2025, several provisions on fines also became applicable. The use of the aforementioned prohibited AI systems may result in an administrative fine of up to EUR 35 million or, if the offender is a company, up to 7% of its total worldwide annual turnover for the previous financial year, whichever is higher. These fines are imposed by the national authorities.

Other developments

In addition to the AI Act, there is the AI Pact, an initiative of the European Commission that has now been signed by more than 200 multinational companies from various sectors. The European Commission celebrated the signing by these first 200 companies in September 2024. The AI Pact is based on two pillars. The first pillar focuses on bringing various stakeholders together with the AI Office so that they can share knowledge and experience, enabling them to prepare for the full implementation of the AI Act. Under the second pillar, stakeholders commit to the principles of trustworthy and ethical AI. Interested parties can still join the AI Pact.

And in the future?

The parts of the AI Act relating to high-risk AI systems will apply from 2 August 2026. A year later, on 2 August 2027, the rules relating to high-risk safety components will apply. Companies would therefore be wise to start making the necessary preparations now!

Want to know more? Please contact Matthijs Gardien, Bine Schoenmaker and/or Leonoor Dröge.

Contact

Attorney at law, Partner

Matthijs Gardien

Expertise: Contract law, Litigation, IT-Law, Cybersecurity, Privacy law, Start-up and Scale-up, Commercial Contracts, E-commerce, Artificial intelligence

Attorney at law

Bine Schoenmaker

Expertise: IT-Law, Privacy law, Contract law, Technology, Media and Telecom, Healthcare, Artificial intelligence, Commercial Contracts

Attorney at law

Leonoor Dröge

Expertise: IT-Law, Privacy law, Contract law, Arbitration, Litigation, Technology, Media and Telecom, Artificial intelligence, Commercial Contracts, E-commerce
