The essentials in 30 seconds

  • The AI Act is the world's first comprehensive regulation of artificial intelligence
  • Prohibited practices have applied since February 2025 (social scoring, manipulation...)
  • GPAI obligations (general-purpose models such as those behind ChatGPT and Claude) have applied since August 2025
  • High-risk systems will be regulated from August 2026
  • Penalties up to 35M EUR or 7% of global turnover

In April 2024, the European Union adopted the regulation on artificial intelligence, commonly known as the AI Act. This landmark legislation establishes the world's first comprehensive legal framework governing the development and use of AI. For businesses, understanding and anticipating these obligations is no longer optional; it is a strategic necessity.

Application timeline: where are we now?

The AI Act takes a phased approach: obligations do not all apply at once, giving businesses time to prepare.

  • August 1, 2024: Regulation enters into force
  • February 2, 2025: Prohibited practices + AI literacy obligation
  • August 2, 2025: GPAI obligations (general-purpose models) + governance
  • December 2025: Whistleblower reporting tool launched by the Commission
  • August 2, 2026: High-risk AI systems (Annex III)
  • August 2, 2027: High-risk AI in regulated products + existing GPAI compliance

Possible postponement

In November 2025, the European Commission proposed, through its "Digital Omnibus" package, to postpone certain deadlines to December 2027. Prohibited practices and GPAI obligations, however, remain in force.

The four risk levels

The AI Act classifies artificial intelligence systems into four categories according to their level of risk. This proportionate approach aims to protect fundamental rights without hindering innovation.

1. Unacceptable risk: prohibited practices

Since February 2, 2025, the following AI uses have been outright prohibited in the EU:

  • Social scoring: rating individuals based on their social behavior or personal characteristics
  • Subliminal manipulation: techniques that exploit psychological vulnerabilities to distort behavior
  • Emotion recognition: inferring emotions in the workplace or in educational institutions
  • Biometric scraping: untargeted collection of facial images from the Internet to build databases
  • Predictive policing: predicting criminality based solely on profiling
  • Real-time biometric identification: in public places (except strictly regulated exceptions for law enforcement)

2. High risk: documentation and compliance

High-risk AI systems are listed in Annex III of the regulation. They concern areas where AI can have a significant impact on people's rights:

  • Biometrics: Identification, categorization, emotion recognition
  • Critical infrastructure: Traffic management, energy, water
  • Education: Access to institutions, assessment, cheating detection
  • Employment: Recruitment, promotion, employee surveillance
  • Essential services: Credit scoring, insurance, social benefits
  • Law enforcement: Lie detection, profiling
  • Migration: Border control, asylum applications

For these systems, providers must implement:

  • A risk management system
  • Training data governance
  • Complete technical documentation
  • Traceability mechanisms
  • Appropriate human oversight
  • CE marking after conformity assessment

3. Limited risk: transparency required

Certain AI systems present only limited risk but are subject to transparency obligations:

  • Chatbots: the user must know they are interacting with an AI, not a human
  • Deepfakes: synthetic content imitating real people must be clearly labeled
  • Biometric categorization and emotion recognition systems: affected persons must be informed
  • Content generation: AI-generated content must be disclosed as such

Impact on marketing chatbots

  • All chatbots on your websites must inform visitors that they are interacting with an AI
  • A visible notice at the start of the conversation is sufficient
  • Virtual customer service assistants are also in scope (see the sketch after this list)
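
As a minimal sketch of what this can look like in practice (the ChatWidgetOptions shape is hypothetical, not any particular chatbot library's API):

```typescript
// Minimal sketch: prepend an AI disclosure to a chatbot greeting.
// `ChatWidgetOptions` is a hypothetical shape, not a real library's API.
interface ChatWidgetOptions {
  greeting: string;      // first message shown to the visitor
  aiDisclosure: boolean; // whether the greeting must disclose AI use
}

function buildGreeting(opts: ChatWidgetOptions): string {
  // AI Act transparency: the visitor must know they are talking to an AI.
  const disclosure = "I am a virtual assistant powered by AI.";
  return opts.aiDisclosure ? `${disclosure} ${opts.greeting}` : opts.greeting;
}

console.log(buildGreeting({ greeting: "How can I help you today?", aiDisclosure: true }));
// -> "I am a virtual assistant powered by AI. How can I help you today?"
```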

4. Minimal risk: no specific obligation

The vast majority of AI systems (spam filters, video games, basic productivity assistants...) present minimal risk and are not subject to any particular obligation.

GPAI obligations: ChatGPT, Claude, and Gemini in scope

General-purpose AI (GPAI) models are subject to a specific regime. Since August 2, 2025, providers of these models must comply with transparency and documentation obligations.

Who is in scope?

  • OpenAI: GPT-5, GPT-4o, DALL-E
  • Anthropic: Claude Opus 4.5, Sonnet, Haiku
  • Google: Gemini 3.0, PaLM
  • Meta: Llama 3.3
  • Mistral: Mistral Large, Mixtral

GPAI provider obligations

  • Technical documentation: Architecture, training process, energy consumption
  • Usage policy: Authorized uses, restrictions, prohibitions
  • Training data summary: Compliance with copyright
  • Instructions for deployers: Integration guide and best practices

The case of systemic risk models

GPAI models whose cumulative training compute exceeds 10^25 FLOPs are presumed to present systemic risk. Their providers must additionally:

  • Conduct model evaluations and adversarial testing
  • Document and report serious incidents
  • Ensure cybersecurity of the model and its infrastructure
  • Notify the Commission within 2 weeks if the threshold is reached

GPT-5, Claude Opus 4.5, and Gemini 3.0 Pro likely fall into this category.
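
As an illustration, here is how a provider might roughly check the threshold. The 6 × parameters × tokens rule of thumb for dense-transformer training compute is our assumption, as are the model size and token count below; the AI Act itself only fixes the 10^25 FLOPs threshold:

```typescript
// Rough training-compute estimate using the common 6 * N * D approximation
// (N = parameter count, D = training tokens). The 6ND heuristic is an
// assumption on our part; the AI Act only sets the 10^25 FLOPs threshold.
const SYSTEMIC_RISK_THRESHOLD = 1e25; // cumulative training FLOPs

function estimateTrainingFlops(parameters: number, tokens: number): number {
  return 6 * parameters * tokens;
}

// Hypothetical model: 400B parameters trained on 15T tokens.
const flops = estimateTrainingFlops(4e11, 1.5e13); // 3.6e25 FLOPs
console.log(flops >= SYSTEMIC_RISK_THRESHOLD);     // true -> systemic-risk regime
```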

Penalties: deterrent amounts

Maximum fines by type of infringement (for most companies, the higher of the two amounts applies):

  • Prohibited practices (Article 5): EUR 35M or 7% of global annual turnover
  • Non-compliance for high-risk systems: EUR 15M or 3% of global annual turnover
  • Supplying incorrect information to authorities: EUR 7.5M or 1% of global annual turnover

For SMEs, the rule flips: the lower of the flat-rate fine and the turnover percentage applies, as the sketch below illustrates.
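
A minimal sketch of this cap logic, assuming a simplified reading of Article 99 (a real assessment weighs many additional factors):

```typescript
// Sketch of the fine-cap logic (simplified): the higher of the two amounts
// for most companies, the lower of the two for SMEs.
function fineCap(flatCapEur: number, turnoverShare: number,
                 globalTurnoverEur: number, isSme: boolean): number {
  const shareAmount = turnoverShare * globalTurnoverEur;
  return isSme ? Math.min(flatCapEur, shareAmount)
               : Math.max(flatCapEur, shareAmount);
}

// Prohibited-practice infringement (EUR 35M or 7% of global turnover):
console.log(fineCap(35_000_000, 0.07, 2_000_000_000, false)); // 140,000,000
console.log(fineCap(35_000_000, 0.07, 20_000_000, true));     // 1,400,000
```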

How to achieve compliance?

Step 1: Map your AI systems

An essential first step is to inventory every tool that uses AI in your organization (a minimal inventory schema is sketched after this list):

  • Business software with AI features
  • Plugins and extensions (e.g., ChatGPT integrations in your existing tools)
  • Cloud services and third-party APIs
  • Chatbots on your websites
  • Marketing automation tools
  • HR solutions (CV sorting, assessment...)
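
Here is what an inventory entry could look like, with illustrative field names and a hypothetical vendor:

```typescript
// Hypothetical inventory record for Step 1; the field names are illustrative.
type AiRiskLevel = "unacceptable" | "high" | "limited" | "minimal" | "unclassified";

interface AiSystemRecord {
  name: string;             // e.g., "Website chatbot"
  vendor: string;           // provider of the tool or underlying model
  purpose: string;          // what the system is used for
  dataCategories: string[]; // personal data it touches, if any
  riskLevel: AiRiskLevel;   // filled in at Step 2
}

const inventory: AiSystemRecord[] = [
  {
    name: "CV screening tool",
    vendor: "ExampleHR", // hypothetical vendor
    purpose: "Pre-sort incoming applications",
    dataCategories: ["CVs", "contact details"],
    riskLevel: "unclassified",
  },
];
```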

Step 2: Classify risks

For each identified system, determine its risk category under the AI Act. Systems involving employment, credit, or security are generally high-risk; a naive first-pass sketch follows.
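
This first pass reuses the risk levels from the Step 1 sketch; the domain list is a rough simplification of Annex III and is no substitute for legal review:

```typescript
// Naive first-pass classifier for Step 2. The domain list is a rough
// simplification of Annex III; real classification needs legal review.
type AiRiskLevel = "unacceptable" | "high" | "limited" | "minimal" | "unclassified";

const HIGH_RISK_DOMAINS = [
  "biometrics", "critical-infrastructure", "education", "employment",
  "essential-services", "law-enforcement", "migration",
];

function firstPassRiskLevel(domain: string, isChatbotOrGenAi: boolean): AiRiskLevel {
  if (HIGH_RISK_DOMAINS.includes(domain)) return "high";
  if (isChatbotOrGenAi) return "limited"; // transparency obligations apply
  return "minimal";
}

console.log(firstPassRiskLevel("employment", false)); // "high"
```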

Step 3: Document

For in-scope systems, build a documentation file including (a hypothetical schema follows the list):

  • System description and objectives
  • Data used and its provenance
  • Performance metrics
  • Identified limitations and biases
  • Human oversight measures
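
A hypothetical schema for such a file, with illustrative field names mirroring the checklist above:

```typescript
// Hypothetical shape for a per-system documentation file (Step 3);
// the fields mirror the checklist above, and the names are illustrative.
interface AiSystemDocumentation {
  description: string;        // system description and objectives
  dataSources: string[];      // data used and its provenance
  performanceMetrics: Record<string, number>; // e.g., { accuracy: 0.94 }
  knownLimitations: string[]; // identified limitations and biases
  humanOversight: string;     // who reviews outputs, and how
  lastReviewed: string;       // ISO date of the last compliance review
}
```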

Step 4: Inform and train

  • Update legal notices on your chatbots
  • Inform employees of AI use in HR processes
  • Train teams on responsible AI issues
  • Designate an AI compliance officer

Impact on AI visibility and digital marketing

The AI Act has direct implications for AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) strategies:

  • Marketing chatbots: Obligation to inform visitors
  • AI-generated content: Potential marking obligation
  • Compliance as a trust signal: Search engines and AI could favor responsible players
  • Algorithmic transparency: Documentation of recommendation systems

Compliance as competitive advantage

  • Turn the regulatory constraint into a mark of trustworthiness
  • Position yourself as a responsible AI player
  • Anticipate compliance criteria in tenders
  • Search engines could favor compliant sites

FAQ: Frequently asked questions about the AI Act

My company uses ChatGPT. Does the AI Act apply to me?
As a deployer (user) of an AI system, you have limited obligations. You mainly need to ensure that your use complies with the provider's terms and inform affected persons when AI impacts their rights (e.g., HR decisions). OpenAI, as the provider, bears most of the GPAI obligations.
Do startups benefit from exemptions?
The AI Act provides for regulatory sandboxes allowing startups and SMEs to test their innovations in a lighter framework. Additionally, penalty amounts are capped lower for small structures.
Does my customer service chatbot need to be modified?
Yes. Since February 2025, all chatbots must clearly inform users that they are interacting with an AI. A mention at the start of the conversation ("I am a virtual assistant powered by AI") is generally sufficient.
Does the AI Act apply to companies outside the EU?
Yes. Like GDPR, the AI Act has extraterritorial reach. It applies to non-European companies as soon as their AI systems are used in the EU or their outputs affect persons located in the EU.

Check your visibility on AI

Discover how your brand appears on ChatGPT, Claude, Gemini, and Perplexity

Start free audit