The EU AI Act: What Your Business Needs to Know

A plain-language compliance guide for organisations using or building AI

A new legal framework for AI is now in force across Europe

The EU AI Act is the world’s first comprehensive AI regulation. It applies to any business that uses or develops AI and that operates in, sells into, or processes data from the EU, regardless of where you’re based. For the most serious breaches, non-compliance carries fines of up to €35 million or 7% of global annual turnover, whichever is higher.
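To make “whichever is higher” concrete, here is a minimal sketch. The turnover figure is invented for illustration; the thresholds are the Act’s headline maximums:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Headline maximum fine under the EU AI Act for the most serious
    breaches: EUR 35 million or 7% of global annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A business with EUR 1 billion turnover: 7% (EUR 70m) exceeds EUR 35m.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```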

If you use AI tools in your work, you have obligations as a deployer.

If you build or sell AI products, you have obligations as a provider.

Both roles carry legal weight. Your supplier’s compliance does not remove your own obligations.

The Act classifies AI systems into four risk tiers. Your obligations depend on where your tools sit.

Unacceptable Risk

Banned

These systems are prohibited outright. No business may use or deploy them. Includes: AI that manipulates people through subliminal techniques, exploits vulnerabilities linked to age or disability, performs social scoring, or enables real-time remote biometric identification in publicly accessible spaces. If a vendor is offering any of these capabilities, do not use them.

High Risk

Regulated

Significant compliance obligations apply. Covers AI used in: hiring and recruitment, employee performance monitoring, education access decisions, credit and insurance assessments, emergency service triage, law enforcement, and border control. Providers must implement risk management systems, data governance, technical documentation, human oversight mechanisms, and automatic logging. Deployers must follow provider instructions, keep usage records, and monitor for issues. Obligations apply from August 2026.

Limited Risk

Transparency required

Applies from August 2026, but costs little to meet early. If your business uses a chatbot, AI-generated summaries, synthetic voices, or any AI-generated content presented to customers or employees, you must clearly disclose that AI is involved. This is not optional and does not require the system to be high-risk. Check your website, customer communications, and any AI-assisted tools that produce outputs visible to others.
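As a minimal sketch of what disclosure can look like in practice, the snippet below prepends a notice to chatbot replies. The generate_answer function and the wording of the notice are placeholders, not requirements taken from the Act:

```python
AI_DISCLOSURE = "You are interacting with an AI assistant."

def generate_answer(user_message: str) -> str:
    # Placeholder for your actual model or vendor API call.
    return "Thanks for your question. Here is what I found..."

def reply(user_message: str) -> str:
    """Return a chatbot reply with a clear AI disclosure attached."""
    return f"{AI_DISCLOSURE}\n\n{generate_answer(user_message)}"

print(reply("When do you open on Saturdays?"))
```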

Minimal Risk

Largely unregulated

Spam filters, basic recommendation engines, AI features in standard software. No specific obligations under the Act at present — though this category is narrowing as AI capabilities expand. Still worth logging these tools as part of your internal audit.

Most businesses operate in High or Limited Risk territory. If your AI touches recruitment, performance management, customer decisions, or automated communications — assume obligations apply. When in doubt, classify up.

Where to start — your three-part compliance foundation

Pillar 1

Classify

Identify every AI tool your business uses or builds — including third-party software with AI features, not just tools purchased specifically as AI. Map each one to the four risk tiers. Document your reasoning in writing. Regulators expect evidence that you assessed your systems even if you conclude they are low risk. This audit is your starting point for everything else.

List every tool in your tech stack. For each one, ask: does this make or inform a decision about a person? If yes, it is probably High Risk.
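If it helps to keep the audit structured, a register can start as simply as the sketch below. The tier names mirror the Act, but the field names, vendors, and entries are illustrative assumptions, and the check encodes the decision-about-a-person heuristic above:

```python
from dataclasses import dataclass

TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AITool:
    name: str
    vendor: str
    decides_about_people: bool  # makes or informs decisions about a person?
    tier: str                   # your documented classification
    reasoning: str              # keep this in writing for regulators

def check(tool: AITool) -> None:
    assert tool.tier in TIERS, f"unknown tier: {tool.tier}"
    if tool.decides_about_people and tool.tier not in ("unacceptable", "high"):
        print(f"Review {tool.name}: it informs decisions about people "
              f"but is classified '{tool.tier}'. When in doubt, classify up.")

register = [
    AITool("CV screening plugin", "ExampleHR Ltd", True, "high",
           "Ranks job applicants; a listed high-risk hiring use case."),
    AITool("Spam filter", "MailCo", False, "minimal",
           "No decisions about people; standard email feature."),
]

for tool in register:
    check(tool)  # prints nothing here: both classifications pass the check
```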

Pillar 2

Govern

For high-risk systems: implement human oversight, maintain usage logs, run data quality checks, and ensure technical documentation exists. For limited-risk systems: add clear AI disclosure notices to every relevant user touchpoint. For most SMEs, human oversight means a named person who reviews AI outputs before they affect a customer or employee decision — it does not require a dedicated compliance team.
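For the named-person review pattern, a usage log can begin as plainly as this sketch. The file path, field names, and reviewer identifier are assumptions for illustration:

```python
import csv
from datetime import datetime, timezone

LOG_FILE = "ai_usage_log.csv"  # illustrative path

def log_reviewed_output(tool: str, output_summary: str,
                        reviewer: str, approved: bool) -> None:
    """Record that a named person reviewed an AI output before it
    affected a customer or employee decision."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            tool, output_summary, reviewer,
            "approved" if approved else "rejected",
        ])

log_reviewed_output("CV screening plugin",
                    "Shortlist of 5 candidates for role #1042",
                    "j.smith", approved=True)
```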

Pillar 3

Monitor

Compliance is lifecycle-based, not a one-time task. Build a process to track changes to AI tools you use, monitor outputs for errors or bias, and escalate serious incidents. A serious incident under the Act includes any AI malfunction or output that causes serious harm to health, safety, or fundamental rights, or significant damage to property. You need a process to log and report these to the relevant authority without undue delay.
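A minimal incident process can start as a structured log with an escalation flag, along the lines of the sketch below. The harm categories paraphrase the Act, and the escalation print is a placeholder for however you actually notify your national authority:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

SERIOUS_HARM = {"health", "safety", "fundamental_rights", "property"}

@dataclass
class Incident:
    tool: str
    description: str
    harm_category: str  # e.g. "health", "safety", "none"
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def serious(self) -> bool:
        return self.harm_category in SERIOUS_HARM

def handle(incident: Incident) -> None:
    print(f"[{incident.logged_at}] {incident.tool}: {incident.description}")
    if incident.serious:
        # Placeholder: report to the relevant authority without undue delay.
        print("ESCALATE: serious incident, notify the relevant authority.")

handle(Incident("CV screening plugin",
                "Systematically rejected applicants over 50",
                "fundamental_rights"))
```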

Using a general-purpose AI model or building on top of one? Foundation model providers (e.g. large language model developers) carry their own obligations under the Act. But if you’re deploying one in a professional context — building a product on top of it, or using it in workflows that touch regulated use cases — you still carry your own risk exposure as a deployer. Assess it independently.