EU AI Act Compliance Guide: What Companies Need Before the August 2026 Deadline

The EU AI Act is the world’s first comprehensive AI regulation — and its enforcement deadlines are approaching fast. Prohibited AI practices have been enforceable since February 2025, GPAI obligations since August 2025, and the high-risk AI system requirements were originally scheduled for August 2026 (though a postponement to 2027 is being debated). Whether the deadline moves or not, companies need to prepare now. Here’s a practical compliance guide for every organization affected by the AI Act.

What Is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive, legally binding regulatory framework for artificial intelligence. It entered into force on August 1, 2024, and is being implemented in phases through 2027.

The Act applies to all AI systems placed on the EU market, put into service in the EU, or whose outputs affect people in the EU — regardless of where the provider is established. Like GDPR, it has extraterritorial reach. If your AI system affects people in the EU, you must comply.

The core philosophy is a risk-based approach: obligations are proportional to the potential harm an AI system can cause. The higher the risk, the stricter the rules.

The Four Risk Categories

Every AI system falls into one of four risk tiers:

Tier 1: Unacceptable Risk — BANNED (Enforceable Since February 2025)

Eight categories of AI practices are outright prohibited:

  1. Subliminal manipulation — AI that deploys techniques beyond a person’s consciousness to distort behavior
  2. Exploitation of vulnerabilities — AI that targets age, disability, or socio-economic status to manipulate behavior
  3. Social scoring — AI that evaluates people based on social behavior, leading to detrimental treatment
  4. Criminal risk assessment — AI that predicts criminal risk based solely on profiling (with narrow exceptions)
  5. Untargeted facial image scraping — Scraping facial images from the internet or CCTV to build facial recognition databases
  6. Emotion recognition in workplaces and schools — AI that infers emotions in employment or educational settings
  7. Biometric categorization — AI that infers race, religion, political opinion, or sexual orientation from biometric data
  8. Real-time remote biometric identification in public spaces — With narrow exceptions for kidnapping, terrorism, and serious crime

A March 2026 vote in the European Parliament’s committees also added a ban on AI “nudifier” systems (deepfake nude generation) as part of the Digital Omnibus amendments.

Tier 2: High Risk — REGULATED (Original Deadline: August 2026)

High-risk AI systems are those that could significantly affect people’s lives. They fall into two categories:

  • Annex I products: AI that is a safety component of regulated products (machinery, medical devices, toys, etc.)
  • Annex III use cases: Eight specific categories including biometric identification, critical infrastructure, education, employment, access to services, law enforcement, migration, and justice

High-risk AI systems must comply with extensive requirements: risk management, data governance, technical documentation, human oversight, accuracy and robustness, and CE marking. We’ll cover these in detail below.

Tier 3: Limited Risk — TRANSPARENCY (Enforceable Since August 2025)

Limited-risk AI systems must comply with transparency obligations:

  • Chatbots: Users must be informed they’re interacting with an AI system
  • Deepfakes: AI-generated images, audio, or video must be disclosed as artificially generated
  • Emotion recognition: Deployers must disclose when emotion recognition is being used
  • Biometric categorization: Deployers must disclose the operation of biometric categorization systems

Tier 4: Minimal/No Risk — NO SPECIFIC OBLIGATIONS

AI systems not falling into any of the above categories (spam filters, video games, basic inventory management) have no specific regulatory obligations under the AI Act. Voluntary codes of conduct are encouraged.
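The four-tier triage above can be sketched as a simple classification helper. This is an illustrative sketch only, not legal advice: the flag names and checks below are our simplified stand-ins for the Act's actual legal tests.

```python
# Simplified sketch of the AI Act's four-tier triage logic.
# The flags are stand-ins for the Act's legal criteria, not an
# authoritative classifier.

PROHIBITED_PRACTICES = {
    "subliminal_manipulation", "vulnerability_exploitation", "social_scoring",
    "profiling_only_crime_prediction", "untargeted_face_scraping",
    "workplace_emotion_recognition", "biometric_categorisation",
    "realtime_public_biometric_id",
}

def risk_tier(practice: str, annex_iii_use_case: bool,
              annex_i_safety_component: bool, interacts_with_humans: bool) -> str:
    """Return the (simplified) AI Act risk tier for a system."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable"   # Article 5 ban, enforceable since Feb 2025
    if annex_iii_use_case or annex_i_safety_component:
        return "high"           # Articles 8-15 apply
    if interacts_with_humans:
        return "limited"        # Article 50 transparency duties
    return "minimal"            # no specific obligations

assert risk_tier("social_scoring", False, False, False) == "unacceptable"
assert risk_tier("none", True, False, False) == "high"
assert risk_tier("none", False, False, True) == "limited"
```

Note that the prohibition check comes first: a banned practice stays banned even if the system would otherwise fit a high-risk category.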

The August 2026 Deadline: What’s Happening?

The original timeline required high-risk AI system obligations to become enforceable on August 2, 2026. However, the European Commission proposed a postponement as part of its Digital Omnibus simplification package in November 2025.

What’s Already in Force (April 2026)

  • Prohibited practices (Article 5) — since February 2, 2025
  • GPAI model obligations (Articles 51-56) — since August 2, 2025
  • AI literacy requirements (Article 4) — since August 2, 2025
  • Transparency obligations (Article 50) — since August 2, 2025
  • GPAI Code of Practice — since August 2, 2025

What’s Pending (Subject to Postponement Debate)

  • High-risk AI system obligations — originally August 2026, now debated for postponement to 2027
  • Annex I product-related high-risk AI — originally August 2027
  • AI-generated text labelling (machine-readable watermarking) — August 2026 or later

The Postponement Debate

In November 2025, the European Commission proposed delaying high-risk AI obligations to 2027, arguing that key harmonised standards and compliance tools weren’t ready. In March 2026, the European Parliament’s IMCO and LIBE committees voted to support the postponement with modifications, adding a ban on AI nudifier systems and fixed application dates.

The original August 2026 deadline remains legally in force until the amendment is formally adopted. Companies should prepare for both scenarios: compliance by August 2026 OR compliance by 2027. The postponement only affects the enforcement date, not the requirements themselves.

High-Risk AI: What Companies Must Do

If your AI system is classified as high-risk, you must comply with Articles 8-15 of the AI Act. Here’s what that means in practice:

Provider Obligations (If You Develop High-Risk AI)

  1. Risk management system (Art. 9) — Continuous identification, analysis, and mitigation of risks throughout the system’s lifecycle
  2. Data governance (Art. 10) — Training, validation, and testing datasets must meet quality criteria: examined for biases, representativeness, and statistical properties
  3. Technical documentation (Art. 11) — Comprehensive documentation including system design, training data, performance metrics, and risk assessments
  4. Record-keeping and logging (Art. 12) — Automatic logging of events for traceability; logs retained for appropriate periods
  5. Transparency to deployers (Art. 13) — Instructions for use including intended purpose, accuracy levels, known risks, and human oversight measures
  6. Human oversight (Art. 14) — Systems must be designed so humans can understand outputs, manually override, or stop the system
  7. Accuracy, robustness, and cybersecurity (Art. 15) — Appropriate levels of accuracy, resilience against errors and adversarial attacks

Deployer Obligations (If You Use High-Risk AI)

  1. Use the system according to its instructions for use
  2. Assign competent human oversight persons
  3. Ensure input data is relevant and sufficiently representative
  4. Monitor the system’s operation and report serious incidents
  5. Retain logs for at least 6 months (or longer per sectoral law)
  6. Conduct a Data Protection Impact Assessment (DPIA) under GDPR where applicable
  7. Perform a fundamental rights impact assessment (Art. 27) before first use, where required (public bodies and certain private deployers of high-risk AI)

Conformity Assessment and CE Marking

Before placing a high-risk AI system on the market, providers must conduct a conformity assessment. For most Annex III systems, self-assessment is permitted (Annex VI). For biometric identification systems, third-party assessment by a notified body is required (Annex VII). After successful assessment, issue an EU Declaration of Conformity and affix the CE marking.

All high-risk AI systems must be registered in the EU AI Database before market placement.

GPAI Rules: What OpenAI, Google, and Anthropic Must Do

General-Purpose AI (GPAI) models like GPT-5.4, Claude, and Gemini have specific obligations under Articles 51-56, enforceable since August 2025.

All GPAI Providers Must:

  • Maintain comprehensive technical documentation (training data, computing resources, design, testing results)
  • Implement a copyright compliance policy, including honouring rights reservations under the EU text-and-data-mining exception
  • Provide a sufficiently detailed summary of training data content
  • Share documentation with downstream providers who incorporate the GPAI model

Systemic-Risk GPAI (Additional Obligations)

A GPAI model is classified as systemic risk if trained using >10^25 FLOPs, or if designated by the Commission. Systemic-risk GPAI providers must also:

  • Perform adverse impact assessments to identify systemic risks
  • Take appropriate measures to mitigate identified risks
  • Report serious incidents to the EU AI Office without delay
  • Ensure adequate cybersecurity protection
  • Comply with the EU Code of Practice for GPAI models
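The 10^25 FLOP threshold can be screened against with a back-of-the-envelope estimate. The sketch below uses the common "6 × parameters × training tokens" heuristic for training compute; this is an approximation, not the Act's official counting method, and the model sizes are hypothetical.

```python
# Rough screen against the 10^25 FLOP systemic-risk threshold (Art. 51).
# Training compute is approximated with the widely used
# "6 * parameters * training tokens" heuristic -- an estimate only.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def is_presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD

# Hypothetical 70B-parameter model on 15T tokens: ~6.3e24 FLOPs, under the bar
assert not is_presumed_systemic_risk(70e9, 15e12)
# Hypothetical 500B-parameter model on 10T tokens: 3e25 FLOPs, over the bar
assert is_presumed_systemic_risk(500e9, 10e12)
```

A model near the threshold should not rely on this heuristic alone; the Commission can also designate a model as systemic risk regardless of compute.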

Penalties: Up to €35 Million or 7% of Global Turnover

The AI Act’s penalty regime is among the most severe in EU regulatory history — significantly higher than GDPR:

| Tier | Violation Type | Maximum Fine |
| --- | --- | --- |
| Tier 1 (most severe) | Violations of prohibited AI practices (Art. 5) | €35 million or 7% of global annual turnover |
| Tier 2 | Violations of GPAI obligations (Art. 51-56) | €15 million or 3% of global annual turnover |
| Tier 3 | All other violations (high-risk, transparency, etc.) | €7.5 million or 1.5% of global annual turnover |

For comparison, GDPR’s maximum is €20 million or 4% of global turnover. The AI Act’s Tier 1 penalties are 75% higher than GDPR’s maximum.

For SMEs and start-ups, fines are capped at the lower of the percentage or fixed amount, subject to a proportionality assessment. Authorities must consider the degree of responsibility, intent, and mitigating factors.
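The cap logic reduces to a max/min rule: ordinary companies face whichever of the fixed amount and the turnover percentage is higher, SMEs whichever is lower. A sketch (the function name and structure are ours, not from the Act, and actual fines are set case by case below these caps):

```python
def max_fine(tier: int, turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine cap under the AI Act's three tiers.
    Non-SMEs: the HIGHER of fixed amount and turnover percentage.
    SMEs/start-ups: the LOWER of the two (Art. 99)."""
    caps = {1: (35e6, 7), 2: (15e6, 3), 3: (7.5e6, 1.5)}  # (EUR, percent)
    fixed, pct = caps[tier]
    pick = min if is_sme else max
    return pick(fixed, turnover_eur * pct / 100)

# EUR 1bn turnover, prohibited-practice violation: 7% exceeds the EUR 35m figure
assert max_fine(1, 1e9) == 70e6
# SME with EUR 50m turnover: capped at the lower figure, 7% of turnover
assert max_fine(1, 50e6, is_sme=True) == 3.5e6
```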

Practical Compliance Checklist: What to Do Now

Whether the August 2026 deadline is postponed or not, companies should start preparing immediately. A 12-18 month compliance infrastructure build is typical. Here’s our 5-phase compliance roadmap:

Phase 1: AI System Inventory (Months 1-2)

  • Map all AI systems in your organization — every tool, model, or system used or developed
  • Classify each system by risk tier (unacceptable, high, limited, minimal)
  • Identify your role for each system: Provider, Deployer, Importer, or Distributor
  • Flag any prohibited uses — immediately cease any AI practices banned under Article 5
  • Document the inventory with version numbers, deployment dates, and responsible owners
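A minimal shape for that inventory record might look like the following. The field names are our own suggestion, not mandated by the Act; the point is to capture tier, role, version, and ownership in one queryable place.

```python
# Illustrative Phase 1 inventory record -- field names are a suggestion,
# not an AI Act requirement.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    version: str
    deployed_on: date
    owner: str        # responsible person or team
    role: str         # "provider" | "deployer" | "importer" | "distributor"
    risk_tier: str = "unclassified"  # filled in during Phase 2
    prohibited_flag: bool = False    # True means cease use immediately

inventory = [
    AISystemRecord("cv-screening-model", "2.1.0", date(2025, 3, 1),
                   "HR Tech", "deployer", risk_tier="high"),
]
assert inventory[0].risk_tier == "high"
```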

Phase 2: Risk Classification (Months 2-3)

  • Screen for prohibited practices (Article 5) — social scoring, manipulative AI, untargeted facial scraping, emotion recognition in workplaces/schools
  • Identify Annex III high-risk systems — check all 8 categories against your AI inventory
  • Check Annex I product coverage — if your AI is a safety component of a regulated product
  • Identify limited-risk systems — chatbots, deepfake generators, emotion recognition tools
  • Determine GPAI model status — if you develop or integrate foundation models

Phase 3: Compliance Gap Analysis (Months 3-5)

For each high-risk AI system, assess compliance with Articles 8-15:

  • Risk management system (Art. 9) — Is there a continuous risk identification process?
  • Data governance (Art. 10) — Are datasets examined for bias, quality, and representativeness?
  • Technical documentation (Art. 11) — Is there comprehensive documentation per Annex IV?
  • Logging capability (Art. 12) — Does the system automatically log relevant events?
  • Transparency (Art. 13) — Are instructions for use clear and complete?
  • Human oversight (Art. 14) — Can humans effectively oversee, understand, and override?
  • Accuracy, robustness, cybersecurity (Art. 15) — Are these measured and maintained?

Phase 4: Implementation (Months 5-10)

  • Build compliance documentation per Annex IV for each high-risk system
  • Implement continuous risk management processes
  • Establish data governance framework — dataset quality assurance, bias testing
  • Design human oversight mechanisms — override capabilities, stop buttons
  • Set up logging infrastructure with appropriate retention
  • Prepare instructions for use for deployers
  • Conduct conformity assessment — self-assessment (Annex VI) or third-party (Annex VII)
  • Draft EU Declaration of Conformity

Phase 5: Registration and Ongoing Compliance (Months 10-12+)

  • Register in the EU AI Database (Article 49) before market placement
  • Affix CE marking on high-risk AI systems
  • Establish post-market monitoring (Articles 61-62)
  • Set up serious incident reporting — report to authorities within 15 days
  • Appoint an EU authorised representative if not established in the EU (Art. 22 for high-risk providers; Art. 54 for GPAI providers)
  • Conduct AI literacy training (Article 4) for all staff who use or oversee AI systems
  • If developing GPAI models: technical documentation, copyright policy, training data summary
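The incident-reporting clock is worth automating. A minimal deadline helper, assuming the 15-day outer limit mentioned above (the Act sets shorter windows for the most severe incidents, which this sketch does not model):

```python
# Deadline helper for serious-incident reporting: the report is due no
# later than 15 days after becoming aware of the incident. Simplified --
# the most severe incident types have shorter statutory windows.
from datetime import date, timedelta

def reporting_deadline(awareness: date, days: int = 15) -> date:
    """Latest date by which the incident report must be filed."""
    return awareness + timedelta(days=days)

assert reporting_deadline(date(2026, 8, 10)) == date(2026, 8, 25)
```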

Sector-Specific Impact

Healthcare

AI for medical diagnosis, triage, and treatment recommendations is high-risk. The Medical Device Regulation (MDR) overlaps: AI that is a safety component of a medical device must comply with both the AI Act and the MDR. Third-party conformity assessment by a notified body is required.

Finance and Insurance

Credit scoring and insurance pricing AI is explicitly high-risk under Annex III. Anti-discrimination testing, documentation of model decisions, and human oversight of automated lending decisions are mandatory. These systems must also comply with GDPR and anti-discrimination law.

Employment and HR

AI for recruitment, screening, performance evaluation, and termination decisions is high-risk. Human oversight is mandatory, bias testing is required, and emotion recognition in the workplace is prohibited (Article 5).

Law Enforcement

AI for polygraphs, recidivism risk assessment, and crime analytics is high-risk. Real-time remote biometric identification in public spaces is prohibited, with narrow exceptions. Strict human oversight and fundamental rights impact assessments are required.

How Does This Compare to Other AI Regulations?

| Region | Approach | Status (April 2026) | Key Difference |
| --- | --- | --- | --- |
| EU | Comprehensive, risk-based, legally binding | In force, phased implementation | Most comprehensive; extraterritorial reach |
| US | Fragmented, sector-specific | No federal AI law; state patchwork | Voluntary guidelines plus state laws (Colorado AI Act) |
| China | State-led, content-focused | Multiple targeted regulations | Focus on content control and social stability |
| UK | Pro-innovation, regulator-led | No AI-specific legislation | Relies on existing regulators; no new statutory obligations |
| South Korea | Comprehensive framework | AI Basic Act passed 2025 | Second comprehensive AI law after the EU |

The Bottom Line

The EU AI Act is the most significant AI regulation in history, and its penalties are severe — up to €35 million or 7% of global turnover for violations of prohibited practices. Whether the August 2026 deadline is postponed to 2027 or not, the obligations remain the same. Companies that delay compliance preparations risk scrambling when enforcement begins.

Our advice: Start your AI inventory and risk classification now. The 5-phase compliance roadmap above is designed to be completed in 12 months. If you’re a deployer of high-risk AI, your obligations are lighter but still significant. If you’re a provider, the documentation and conformity assessment requirements are extensive.

The AI Act is not going away. The question is not whether to comply, but whether you’ll be ready when enforcement begins.
