What is the EU AI Act?
The EU AI Act is the world's first comprehensive AI regulation, adopted in 2024, with obligations phasing in from early 2025 and most applying from August 2026. It establishes a risk-based framework that classifies AI systems into four categories (unacceptable risk, high risk, limited risk, and minimal risk) with corresponding obligations.
For businesses, this means understanding which category your AI systems fall into and meeting the relevant compliance requirements. The Act applies to anyone placing AI systems on the EU market, wherever they are based, and to anyone whose AI system outputs are used in the EU.
Risk Classifications
Unacceptable risk: Banned outright. Includes social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI that exploits the vulnerabilities of specific groups.
High risk: Heavily regulated. Includes AI in recruitment, credit scoring, education, healthcare diagnostics, and critical infrastructure. Requires conformity assessments, documentation, human oversight, and ongoing monitoring.
Limited risk: Transparency obligations. Chatbots must disclose they are AI. AI-generated content must be labelled. Emotion recognition systems must inform users. A minimal implementation sketch follows this list.
Minimal risk: No specific obligations. Most business AI applications — document processing, content generation, data analysis — fall here. However, general best practices still apply.
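For limited-risk systems, the transparency duty translates fairly directly into code. The sketch below shows one way to surface the disclosure and label generated content. It is a minimal illustration in Python: call_model is a stand-in for whatever LLM API you use, and the disclosure wording and metadata field names are our own examples, not text mandated by the Act.

```python
from datetime import datetime, timezone

# Placeholder for your real model call; swap in your provider's API.
def call_model(user_message: str) -> str:
    return f"(model reply to: {user_message})"

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def chatbot_response(user_message: str, session_is_new: bool) -> dict:
    """Return a reply carrying the Act's transparency signals:
    an up-front disclosure for new sessions, plus machine-readable
    labelling of the generated content."""
    return {
        "text": call_model(user_message),
        "disclosure": AI_DISCLOSURE if session_is_new else None,
        "metadata": {
            "ai_generated": True,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
```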
Impact on UK Businesses
Post-Brexit, the EU AI Act does not automatically apply to UK businesses. However, it applies if your AI systems are used by or affect people in the EU, you place AI products on the EU market, or your AI output is used in the EU. In practice, most UK companies with European customers or operations will need to comply.
The UK is developing its own AI regulatory framework through a sector-specific approach rather than a single act. The AI Safety Institute provides guidance, and existing regulators (FCA, Ofcom, ICO) are integrating AI oversight into their remits.
Compliance Requirements
For high-risk AI systems, you must: maintain a risk management system, ensure training data quality and documentation, provide transparency to users, enable human oversight, ensure accuracy and robustness, and maintain detailed technical documentation.
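Of these, human oversight is the requirement that most often becomes a concrete routing decision in the application layer. The sketch below illustrates one common pattern for a hypothetical credit-scoring model: automate only clear-cut cases and refer everything else to a reviewer. The threshold, field names, and route_credit_decision function are all ours; real thresholds should come from your own validation data.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    score: float               # model score in [0, 1]
    outcome: str               # "approve" or "refer"
    needs_human_review: bool

# Illustrative threshold only; derive yours from validation data.
AUTO_APPROVE_ABOVE = 0.90

def route_credit_decision(applicant_id: str, score: float) -> Decision:
    """Gate a high-risk AI output behind human oversight.

    Only high-confidence approvals are automated; uncertain and
    adverse cases are queued for a human reviewer, which keeps the
    oversight meaningful rather than a rubber stamp."""
    if score >= AUTO_APPROVE_ABOVE:
        return Decision(applicant_id, score, "approve", needs_human_review=False)
    return Decision(applicant_id, score, "refer", needs_human_review=True)
```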
For general-purpose AI models (foundation models), providers must: document training processes, comply with copyright law, publish training content summaries, and, for models posing systemic risk, conduct adversarial testing and report serious incidents.
Documentation Needs
Documentation is the backbone of compliance. You need: system descriptions, risk assessments, data governance records, testing and validation results, human oversight procedures, and post-market monitoring plans. The good news is that well-run AI projects already produce most of this documentation as part of good engineering practice.
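One low-effort way to keep that documentation auditable is to track it as structured data alongside the system it describes. The following Python sketch uses invented field names; the Act specifies what high-risk documentation must cover (Annex IV), not this particular structure.

```python
from dataclasses import dataclass, fields

@dataclass
class ComplianceDossier:
    """One record per AI system; each field holds a path or URL
    to the artefact, or None while it is still missing."""
    system_description: str | None = None
    risk_assessment: str | None = None
    data_governance_records: str | None = None
    test_and_validation_results: str | None = None
    human_oversight_procedure: str | None = None
    post_market_monitoring_plan: str | None = None

    def missing(self) -> list[str]:
        """List the artefacts still to be produced."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

dossier = ComplianceDossier(
    system_description="docs/cv-screener/overview.md",
    risk_assessment="docs/cv-screener/risk-assessment-v2.pdf",
)
print(dossier.missing())
# ['data_governance_records', 'test_and_validation_results',
#  'human_oversight_procedure', 'post_market_monitoring_plan']
```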
Preparation Checklist
1. Inventory your AI systems. List every AI tool, model, and system in use across your organisation. Include third-party AI services.
2. Classify by risk level. Map each system against the Act's risk categories. Most business AI will be minimal or limited risk.
3. Assess gaps. For high-risk systems, audit your current documentation, testing, and monitoring against the Act's requirements.
4. Update contracts. Review agreements with AI providers to ensure they meet transparency and data handling requirements.
5. Train your team. Ensure staff understand their obligations, particularly around human oversight and incident reporting.
6. Establish monitoring. Set up ongoing monitoring for bias, drift, and performance degradation in your AI systems.
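Step 6 is the most code-shaped item on this list. A common lightweight drift check is the population stability index (PSI), which compares the distribution of recent model scores against a reference window. The Python sketch below is a minimal implementation; the 0.2 alert threshold is a widely used rule of thumb, not a figure from the Act.

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               recent: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a reference and a recent score distribution.

    Bins are cut at the reference window's quantiles, so each bin
    holds roughly equal reference mass."""
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range scores

    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    new_frac = np.histogram(recent, bins=edges)[0] / len(recent)

    eps = 1e-6                               # avoid log(0) on empty bins
    ref_frac = np.clip(ref_frac, eps, None)
    new_frac = np.clip(new_frac, eps, None)
    return float(np.sum((new_frac - ref_frac) * np.log(new_frac / ref_frac)))

# Example: last quarter's scores vs. this week's production scores.
rng = np.random.default_rng(0)
baseline = rng.normal(0.60, 0.10, 10_000)
current = rng.normal(0.55, 0.12, 2_000)     # shifted: should trip the alarm
psi = population_stability_index(baseline, current)
if psi > 0.2:                               # >0.2 is a conventional alert level
    print(f"Drift alert: PSI = {psi:.3f}")
```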