Artificial Intelligence Act (AI Act)
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence, entered into force on 1 August 2024; most of its provisions apply from 2 August 2026. The EU AI Act is a new law to ensure that AI systems placed on the EU market are safe, trustworthy and respect fundamental rights. Its aim is to strengthen the internal market and boost innovation (especially for SMEs), while protecting health, safety, democracy and the rule of law. The Act lays down uniform EU-wide rules: it defines which AI practices are banned, sets strict requirements for high-risk AI systems, mandates transparency measures (like labeling AI-generated content), and creates processes for market surveillance and enforcement.
Frequently Asked Questions – EU Artificial Intelligence Act
1. What does the AI Act regulate?
The Act covers AI systems placed on the EU market or used within the EU, even where the provider is established outside the EU. It sets common rules for how AI may be sold, deployed and used: it bans certain harmful AI practices (Art. 5), imposes strict requirements on "high-risk" AI (such as systems used in critical sectors), establishes transparency obligations (disclosure and labeling), and creates an EU database and oversight mechanisms. It also regulates general-purpose AI models (such as large language models), requiring the most capable of them to be evaluated, tested and monitored for systemic risk.
2. Who must follow the Act?
It applies to a wide range of actors. Providers of AI systems (those who develop them and place them on the market or put them into service) are covered whether they are established in the EU or abroad, as long as the system or its output is used in the EU. So are deployers (anyone using an AI system under their authority in the EU), along with importers, distributors and product manufacturers that bundle AI with their products. The main carve-outs are military and national-security uses, purely personal non-professional use, and scientific research and development before a system is placed on the market.
3. What are "high-risk AI systems"?
High-risk systems are those that could significantly harm people’s health, safety or fundamental rights. These include AI used in critical infrastructure, medical devices, hiring and HR, education and exams, essential public services (e.g. welfare), law enforcement and border control, and legal or democratic processes. High-risk AI must meet strict obligations including testing, documentation, and conformity assessment.
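As a loose illustration only (nothing in the Act prescribes this), a compliance team might run a first-pass triage against these areas before seeking legal review; the category names and function below are hypothetical shorthands, not terms from the Regulation:

```python
# Hypothetical first-pass triage against the high-risk areas listed above.
# Matching a label is no substitute for legal analysis of Annex III.
HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "medical_devices",
    "employment_and_hr",
    "education_and_exams",
    "essential_public_services",
    "law_enforcement",
    "border_control",
    "justice_and_democracy",
}

def needs_conformity_assessment(intended_use: str) -> bool:
    """Flag a system for the full high-risk workflow (see Q5) if its
    intended use falls within one of the areas above."""
    return intended_use in HIGH_RISK_AREAS

print(needs_conformity_assessment("employment_and_hr"))  # True
```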
4. What AI practices are prohibited?
Article 5 bans several dangerous AI practices:
- Subliminal or manipulative AI that distorts behavior or exploits vulnerabilities
- Social scoring systems that unfairly rank people
- Predictive policing that assesses a person's risk of offending based solely on profiling or personality traits
- Biometric categorization to infer sensitive traits
- Emotion recognition in workplaces and schools (except for medical or safety reasons)
- Untargeted facial scraping for recognition databases
Real-time remote biometric identification in publicly accessible spaces by law enforcement is also prohibited, except in narrowly defined situations (such as searching for victims of serious crimes or preventing an imminent threat) subject to strict safeguards.
5. What must companies do to comply?
Providers of high-risk AI must:
- Implement a risk management and quality system
- Maintain technical documentation and activity logs
- Conduct a conformity assessment before marketing
- Issue an EU Declaration of Conformity and affix CE marking
- Provide detailed user instructions and warnings
After placing a system on the market, they must also monitor its performance and report serious incidents to the authorities.
6. How does it affect businesses and developers?
Businesses face increased compliance burdens for high-risk AI, including documentation, testing, and possible redesign. In return, the Act replaces fragmented national rules with a single EU framework, reducing complexity, and it supports innovation through regulatory sandboxes and dedicated support measures for SMEs. Non-EU providers must also comply if they target the EU market.
7. How will the Act be enforced, and what are the penalties?
National market surveillance authorities will oversee enforcement. The maximum fines are (for companies, the higher of the fixed amount and the share of worldwide annual turnover):
- Up to €35 million or 7% of global turnover for banned practices
- Up to €15 million or 3% for non-compliance with high-risk obligations
- Up to €7.5 million or 1% for supplying incorrect or misleading information
Authorities can also inspect systems, request documentation, and order corrective measures; the fine ceilings are illustrated in the sketch below.
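To make the ceilings concrete, here is a minimal sketch of how the two caps combine. The "whichever is higher" rule for companies (and "lower" for SMEs) follows Art. 99; the tier keys and function name are our own:

```python
# Illustrative only: fine ceilings from the list above. For companies the
# Act takes the HIGHER of the fixed amount and the turnover share; for
# SMEs it takes the lower. Tier keys and naming are hypothetical.
FINE_TIERS = {
    "prohibited_practice":  (35_000_000, 0.07),
    "high_risk_obligation": (15_000_000, 0.03),
    "false_information":    (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float, sme: bool = False) -> float:
    fixed_cap, pct = FINE_TIERS[violation]
    pick = min if sme else max
    return pick(fixed_cap, pct * annual_turnover_eur)

# A firm with EUR 1bn turnover: 7% = EUR 70m, above the EUR 35m floor.
print(f"{max_fine('prohibited_practice', 1_000_000_000):,.0f}")  # 70,000,000
```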
8. What transparency requirements are there?
Key obligations (Art. 50) include:
- Informing users when they are interacting with an AI system (unless this is obvious from context)
- Marking synthetic audio, image, video and text content as AI-generated in a machine-readable format
- Disclosing the use of emotion recognition or biometric categorization systems
- Labeling deepfakes and AI-generated text published to inform the public
These rules help users identify AI and understand when decisions or content are automated.
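As an illustration only: the Act requires machine-readable marking of synthetic content but does not prescribe a schema (provenance standards such as C2PA are one candidate), so every field name in this sketch is hypothetical:

```python
# Minimal sketch of a machine-readable "AI-generated" marker. The Act
# prescribes no schema; all field names here are hypothetical.
import json
from datetime import datetime, timezone

def label_synthetic(content: bytes, generator: str) -> dict:
    return {
        "ai_generated": True,          # the disclosure flag itself
        "generator": generator,        # which model or system produced it
        "content_bytes": len(content),
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(label_synthetic(b"<rendered image>", "example-image-model"), indent=2))
```

In practice such a marker would be embedded in the content's own metadata (or cryptographically bound to it) rather than kept as a separate record.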
9. Will the Act stifle innovation or hurt users?
The Act is designed to balance safety and innovation. It concentrates obligations on the most harmful risks, while minimal-risk AI (the large majority of systems) faces few or no new requirements. Regulatory sandboxes and clearer rules are intended to support safe development and strengthen user trust.
10. When does the Act come into effect?
The Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024. Most provisions apply from 2 August 2026. The bans on prohibited practices began applying on 2 February 2025, obligations for general-purpose AI models apply from 2 August 2025, and rules for certain high-risk systems embedded in regulated products phase in until 2 August 2027. Businesses should aim for compliance well ahead of the 2026 deadline; a small date checker follows.
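For orientation, a small sketch that checks which milestones from this timeline already apply on a given date (the labels are informal summaries, not the Regulation's wording):

```python
# Phase-in checker built from the dates above (labels are informal).
from datetime import date

MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "prohibited practices (Art. 5) apply"),
    (date(2025, 8, 2), "general-purpose AI model obligations apply"),
    (date(2026, 8, 2), "most remaining provisions apply"),
    (date(2027, 8, 2), "final rules for AI in regulated products apply"),
]

def in_effect(on: date) -> list[str]:
    """Return the milestones already applicable on the given date."""
    return [label for start, label in MILESTONES if on >= start]

print(in_effect(date(2026, 9, 1)))  # first four milestones
```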

We track and interpret every development surrounding the EU Artificial Intelligence Act as it happens.