The AI Act: what changes for companies?
Published on 6 October 2025 - Directorate of Legal and Administrative Information (Prime Minister)
The AI Act is a framework for the use of artificial intelligence. What impact will this regulation have on companies? An explainer.

What is the AI Act?
The AI Act (EU Regulation 2024/1689), in force since 1 August 2024, is the world's first AI regulation.
It regulates the development of AI in order to protect fundamental rights and user safety. It also encourages investment in innovation through “regulatory sandboxes”: experimental spaces that provide a controlled environment in which companies can “develop, train, test and validate innovative AI systems.”
The aim of the AI Act is to achieve “trustworthy” AI. It concerns all organizations, including companies, that provide, distribute or deploy artificial intelligence systems or models.
This Regulation adopts a hierarchical approach to the risks associated with AI systems:
- unacceptable risk: systems that are strictly prohibited (manipulation, exploitation of vulnerabilities, biometric categorization, etc.);
- high risk: systems with a significant impact, which are already governed by European regulations (biometrics, safety, education, employment, medical devices, etc.);
- limited risk: systems subject to an obligation to inform users that they are interacting with artificial intelligence;
- minimal or no risk: systems that present little or no identified risk (spam filters, etc.).
What is the timetable for application?
The AI Act applies gradually between 2025 and 2027.
| Date | Milestone |
|---|---|
| 1 August 2024 | Entry into force of the AI Act |
| 2 February 2025 | Prohibition of AI systems posing an unacceptable risk |
| 2 August 2025 | Application of the governance rules and of the obligations for general-purpose AI models |
| 2 August 2026 | General application of the Regulation, including the obligations on high-risk AI systems |
| 2 August 2027 | Application of the Regulation to products incorporating high-risk AI |
What are the consequences of the AI Act for companies?
As of 2 August 2026, companies placing on the market or developing products incorporating high-risk AI systems will have to:
- be registered in the EU database;
- obtain CE marking before placing the product on the market, indicating that it legally complies with EU requirements;
- establish a documented and regularly updated risk management system;
- draw up comprehensive documentation explaining how the AI system works, to ensure transparency and traceability (in the form of instructions for use);
- implement human oversight of the AI system before it enters service or the product is placed on the market, together with a mechanism to guide and inform the person responsible for that oversight;
- keep records to ensure data protection and to assess the level of compliance with AI regulation, and put in place a system adapted to cybersecurity risks;
- ensure the ongoing quality of the high-risk AI system, “its robustness, accuracy and cybersecurity,” through technical and organizational measures.
Are there sanctions for non-compliance with the AI Act?
The AI Act provides for administrative penalties in the event of non-compliance. The fine imposed varies according to criteria such as the risk category concerned and the size of the company.