The EU AI Act 2026 compliance landscape is undergoing its most significant shift since the regulation entered into force in August 2024. In March 2026, both the European Parliament and Council aligned on postponing the high-risk AI system obligations beyond the original 2 August 2026 deadline. But here’s the critical detail many businesses are missing: several requirements already apply, including the prohibitions enforceable since February 2025 and the GPAI obligations in force since August 2025.
As we noted in our 2026 technology trends analysis, AI regulation is one of the defining shifts of this year. In this EU AI Act 2026 compliance guide, we break down what’s changed, what hasn’t, and what your organisation needs to prioritise right now to avoid penalties and stay ahead of the regulatory curve.
What Is the EU AI Act?
Regulation (EU) 2024/1689 — commonly known as the EU AI Act — is the European Union’s landmark legislation governing artificial intelligence. It establishes a risk-based classification system for AI, imposing requirements proportionate to the potential harm an AI system could cause.
The regulation applies to any organisation that places AI systems on the EU market or uses them within the EU, regardless of where the company is headquartered. This means US, Asian, and other non-EU companies serving European customers must also comply.
The Act classifies AI systems into four risk tiers:
- Unacceptable risk: Banned outright (social scoring, real-time biometric surveillance in public spaces, manipulation of vulnerable groups)
- High risk: Subject to mandatory conformity assessments, documentation, human oversight, and registration (e.g., medical diagnostics, recruitment screening, credit scoring)
- Limited risk: Transparency obligations only (chatbots, deepfakes, emotion recognition outside the prohibited workplace and education contexts)
- Minimal risk: No specific requirements (spam filters, video games, basic AI tools)
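To make these tiers concrete, here is a minimal sketch of how an internal AI inventory might record the classification. The tier names mirror the Act’s categories, but the example systems, the mapping, and the helper function are purely illustrative assumptions rather than legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment, documentation, human oversight
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific requirements

# Hypothetical inventory entries -- illustrative, not legal classifications.
inventory = {
    "cv-screening-tool": RiskTier.HIGH,   # recruitment screening falls under Annex III
    "support-chatbot": RiskTier.LIMITED,  # must disclose that users are talking to AI
    "spam-filter": RiskTier.MINIMAL,
}

def needs_conformity_assessment(tier: RiskTier) -> bool:
    """Only high-risk systems require a conformity assessment before market placement."""
    return tier is RiskTier.HIGH

for name, tier in inventory.items():
    print(f"{name}: {tier.value} risk; conformity assessment needed: {needs_conformity_assessment(tier)}")
```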
Additionally, the Act introduces a separate category for General Purpose AI (GPAI) models — systems like GPT, Claude, and Gemini that can be adapted for a wide range of tasks (see our comparison of leading AI assistants). These have their own set of obligations.
Updated Timeline: What Changed in 2026
The original implementation timeline for the EU AI Act was structured as a phased rollout over three years. However, as of April 2026, significant changes are underway.
Original Timeline (Now Partially Revised)
- 2 February 2025: Prohibited AI practices take effect — IN EFFECT
- 2 August 2025: GPAI model obligations and transparency requirements — IN EFFECT
- 2 August 2026: High-risk AI system obligations — BEING POSTPONED
- 2 August 2027: Remaining obligations (embedded high-risk systems) — ALSO DELAYED
What’s Changing
In March 2026, the European Parliament’s IMCO and LIBE committees voted 101-9 in favour of postponing the high-risk AI system deadlines. Both the Parliament and Council have now converged on new fixed dates:
- 2 December 2027: Stand-alone high-risk AI system obligations
- 2 August 2028: Embedded high-risk AI system obligations
The postponement is driven by practical concerns: the harmonised standards that organisations need in order to demonstrate compliance have not yet been finalised. Without those standards, conducting meaningful conformity assessments is extremely difficult.
Trilogue negotiations between the Parliament, Council, and Commission are currently underway to formalise these new dates. While the exact dates could still shift slightly during negotiations, the direction is clear — the high-risk deadline is moving.
What’s Already in Effect (February 2025)
Despite the postponement of high-risk obligations, several requirements are already enforceable as of 2 February 2025. Organisations that have delayed action because of the postponement news may already be in violation.
The following AI practices are now prohibited in the EU:
- Social scoring: AI systems that evaluate or classify individuals based on social behaviour or personality traits in ways that lead to detrimental treatment
- Real-time remote biometric identification: Use of real-time facial recognition in public spaces (with narrow law enforcement exceptions)
- Predictive policing: AI systems that assess the risk of individuals committing criminal offences based solely on profiling
- Emotion recognition in workplaces and schools: AI that infers emotions from biometric data in employment or education contexts
- Manipulation of vulnerable groups: AI that exploits age, disability, or socio-economic vulnerability to distort behaviour
- Untargeted facial recognition scraping: Building facial recognition databases by indiscriminately scraping images from the internet or CCTV
Violations of these prohibitions carry the highest penalties under the Act — up to €35 million or 7% of global annual turnover, whichever is higher.
GPAI Obligations: Already in Effect Since August 2025
While the high-risk deadline is being postponed, the GPAI model obligations have been in effect since 2 August 2025. These are the requirements that should concern most technology companies right now. If you haven’t started complying, you may already be in violation.
Under the AI Act, GPAI model providers face four core obligations:
- Technical documentation: Maintain detailed documentation covering training data, compute, design choices, and evaluation results
- Copyright compliance: Implement a policy to comply with EU copyright law, including the text and data mining opt-out mechanism
- Transparency: Provide sufficient information for downstream providers to integrate the model compliantly
- Risk assessment: For systemic-risk GPAI models (those trained using more than 10^25 FLOPs; see the rough estimate sketched after this list), conduct adversarial testing, incident reporting, and cybersecurity protections
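For a sense of scale on the 10^25 FLOP threshold, the sketch below uses the common approximation that training compute is roughly 6 × parameters × training tokens. The model sizes and token counts are illustrative assumptions, not figures for any real model, and providers near the threshold should track actual compute rather than rely on this rule of thumb.

```python
# Rough training-compute estimate versus the Act's systemic-risk threshold,
# using the widely cited "6 * N * D" approximation (N = parameters,
# D = training tokens). All figures below are illustrative assumptions.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: about 6 FLOPs per parameter per token."""
    return 6 * params * tokens

examples = {
    "7B params, 2T tokens": estimated_training_flops(7e9, 2e12),       # ~8.4e22
    "70B params, 15T tokens": estimated_training_flops(70e9, 15e12),   # ~6.3e24
    "400B params, 15T tokens": estimated_training_flops(400e9, 15e12), # ~3.6e25
}

for label, flops in examples.items():
    side = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{label}: ~{flops:.1e} FLOPs ({side} the 1e25 threshold)")
```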
Companies deploying GPAI models — whether they’re fine-tuning an open-source model or building applications on top of commercial APIs — need to understand their obligations as both providers and deployers. The distinction matters: if you substantially modify a GPAI model, you may become a provider with additional responsibilities.
High-Risk Systems: Understanding the Postponement
The postponement of the high-risk AI system deadline is significant, but it doesn’t mean organisations should pause their compliance efforts. Here’s why.
What’s Being Postponed
The obligations under Chapter III of the AI Act — which cover risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and conformity assessment for high-risk systems — will not apply from August 2026 as originally planned.
What’s NOT Being Postponed
The postponement does not affect:
- Prohibited practices (already in effect)
- GPAI obligations (August 2025)
- Transparency obligations for limited-risk systems (due to apply from August 2026)
- The classification of AI systems — your system is still high-risk regardless of the deadline
Why You Shouldn’t Wait
Even with the extended timeline, we strongly recommend that organisations continue their compliance preparations. Here’s why:
- Standards are coming: The harmonised standards being developed by CEN and CENELEC will define exactly how to achieve compliance. Organisations that wait for these standards will face a last-minute rush.
- Competitive advantage: Companies that can demonstrate AI Act compliance — even before it’s mandatory — will have a significant edge in EU procurement and enterprise sales.
- GDPR overlap: Many AI Act requirements overlap with existing GDPR obligations. If you’re processing personal data in your AI systems, you’re already subject to data protection requirements that the AI Act builds upon. Our guide to online privacy protection covers the intersection of AI and data rights.
- Complex systems take time: High-risk AI systems in healthcare, finance, and critical infrastructure often require 12-18 months of compliance work. Starting in 2027 for a December 2027 deadline is not realistic.
EU AI Act 2026 Compliance Checklist: What to Do Right Now
Based on the current regulatory landscape, here’s our prioritised action plan for European businesses:
Immediate Actions (May 2026)
- Audit all AI systems currently in use and classify them by risk tier
- Identify any prohibited practices and cease them immediately
- Review third-party AI tools and services for compliance gaps
- Appoint an AI compliance lead or team
Since August 2025 — Already in Effect
- If you develop or deploy GPAI models, ensure technical documentation is complete (mandatory since August 2025)
- Implement a copyright compliance policy for training data (mandatory since August 2025)
- Provide transparency information for downstream providers (mandatory since August 2025)
- For systemic-risk models: adversarial testing and incident reporting procedures must be in place (mandatory since August 2025)
Before December 2027 (New High-Risk Deadline)
- Complete conformity assessments for high-risk AI systems
- Finalise technical documentation and risk management systems
- Implement human oversight mechanisms
- Register high-risk systems in the EU database
- Affix CE marking to compliant systems
- Establish post-market monitoring processes
Ongoing
- Monitor trilogue negotiations for finalised dates
- Track CEN/CENELEC harmonised standards development
- Review and update compliance as standards are published
- Train staff on AI Act obligations and prohibited practices
Penalties and Enforcement: What’s at Stake
The EU AI Act carries some of the most significant penalties in technology regulation. National market surveillance authorities are responsible for enforcement, and they’re already building capacity.
- Prohibited practices violations: Up to €35 million or 7% of global annual turnover
- High-risk system violations: Up to €15 million or 3% of global annual turnover
- Incorrect information to authorities: Up to €7.5 million or 1% of global annual turnover
- GPAI model violations: Up to €15 million or 3% of global annual turnover
For SMEs, penalties are capped at whichever is lower of the fixed amount or the percentage of turnover, but the financial impact can still be devastating.
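To make the exposure concrete, here is a minimal sketch of the “whichever is higher” rule and the SME “whichever is lower” cap. The turnover figures are illustrative assumptions.

```python
# Maximum fine exposure per penalty tier: standard companies face the HIGHER
# of the fixed amount and the turnover percentage; SMEs face the LOWER of
# the two. Turnover figures below are illustrative assumptions.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_violations": (15_000_000, 0.03),
    "gpai_violations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    fixed, pct = PENALTY_TIERS[tier]
    bound = min if is_sme else max
    return bound(fixed, pct * global_turnover_eur)

print(max_fine("prohibited_practices", 2_000_000_000))           # 140,000,000.0: 7% of turnover exceeds EUR 35m
print(max_fine("prohibited_practices", 20_000_000, is_sme=True)) # 1,400,000.0: SME cap takes the lower amount
```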
Importantly, the penalties for prohibited practices are already enforceable. Companies using banned AI techniques — even unknowingly — face the highest penalty tier.
Frequently Asked Questions
Is the EU AI Act already in effect?
Yes, partially. The prohibition on certain AI practices has been enforceable since 2 February 2025. GPAI model obligations have been in effect since 2 August 2025. The high-risk system obligations are being postponed from August 2026 to approximately December 2027.
Does the EU AI Act apply to non-EU companies?
Yes. The Act applies to any organisation that places AI systems on the EU market or uses AI systems within the EU, regardless of where the company is based. If you serve European customers with AI-powered products, you must comply.
What counts as a “high-risk” AI system?
High-risk AI systems are defined in Annex III of the Act and include AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. If your AI system makes or influences decisions that significantly affect people’s lives, it’s likely high-risk.
Should we stop our EU AI Act 2026 compliance preparation because of the postponement?
Absolutely not. The postponement only affects the high-risk system deadline. Prohibited practices are already enforceable, GPAI obligations have been in effect since August 2025, and the compliance process for complex high-risk systems typically takes 12-18 months. Organisations that delay preparation will face a costly last-minute scramble.
What’s the difference between a GPAI provider and a GPAI deployer?
A GPAI provider develops or substantially modifies a general-purpose AI model (like GPT or Claude). A GPAI deployer uses these models in their products or services. Providers have more extensive obligations (documentation, copyright policy, transparency), while deployers must ensure their use complies with the Act’s requirements for their specific application.
How does the AI Act interact with GDPR?
The AI Act and GDPR are complementary. If your AI system processes personal data, you must comply with both. The AI Act builds on GDPR’s data protection principles and adds AI-specific requirements like risk management systems, human oversight, and transparency. In practice, many AI Act compliance measures will also support GDPR compliance.
Conclusion
The EU AI Act’s timeline is evolving, but the direction is unmistakable: comprehensive AI regulation in Europe is happening. The postponement of the high-risk deadline is a practical adjustment, not a rollback. Prohibited practices are already enforceable, GPAI obligations are already in force, and the extended timeline for high-risk systems should be used wisely, not wasted.
We recommend that every organisation operating in the EU take three steps immediately: audit your current AI systems, identify any prohibited practices, and close any gaps against the GPAI obligations that have applied since August 2025. The companies that treat the postponement as breathing room rather than a pause button will be the ones best positioned when the remaining obligations become mandatory.
The regulatory landscape will continue to evolve as trilogue negotiations conclude and harmonised standards are published. We’ll keep this guide updated as new information becomes available.