The EU AI Act adopts a risk-based approach, broadly encompassing all forms of AI. The act sorts AI systems into four levels, from unacceptable to minimal risk. The measures contain both hefty penalties for non-compliance and some indirect incentives – as well as pointers to some direct sources of money. They seem to be trying to fire up a lagging industry and innovation machine, all while safeguarding fundamental rights.
Are they trying to close the proverbial barn door after the chickens have bolted? Or are they counting their horses before they hatch – which they never will, because they’re too busy watching the pot?
Turns out you can’t make an AI omelet without (maybe) breaking some of Schrödinger’s eggs.
(Ok, I know that this is a reach, but there is an essential creative tension between the enforcement and the innovation side of things – and it is interesting reading the legislation with this in mind. And you do have to make mistakes and break things if you are going to lead.)
Status: Finalized (Phased Implementation)
Official Website: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html
This is the parliamentary entry as well as the text as adopted. It’s not the clearest, but is the official-est.
Best 3rd Party Website: https://artificialintelligenceact.eu/
That 3rd party site looks really well done – has a nice “what applies to me” tool, seems reasonably non-partisan, or at least non-partisan enough to hide it pretty well.
Timeline:
August 1, 2024: Official entry into force
February 2, 2025: Prohibitions on unacceptable risk AI systems apply
August 2, 2025: Obligations for general-purpose AI models and penalties apply
August 2, 2026: First phase of high-risk AI system regulations apply
August 2, 2027: Second phase of high-risk AI system regulations apply
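If it helps to have the phased timeline above in a machine-checkable form, here's a toy Python sketch – a lookup of the milestones listed above, nothing official, with a small helper to see which phases have already kicked in on a given date:

```python
from datetime import date

# Toy lookup of the Act's phased milestones (dates from the timeline above).
EU_AI_ACT_PHASES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI systems apply"),
    (date(2025, 8, 2), "General-purpose AI obligations and penalties apply"),
    (date(2026, 8, 2), "First phase of high-risk AI system regulations apply"),
    (date(2027, 8, 2), "Second phase of high-risk AI system regulations apply"),
]

def phases_in_effect(today: date) -> list[str]:
    """Return the milestones that have already taken effect by `today`."""
    return [label for deadline, label in EU_AI_ACT_PHASES if deadline <= today]
```

For example, `phases_in_effect(date(2025, 3, 1))` returns the first two milestones – entry into force plus the prohibitions.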
Key Provisions Relevant to Medium-Sized Enterprises:
The EU AI Act entered into force on August 1, 2024, with a phased implementation approach:
Risk-Based Framework
The Act maintains its four-tier risk classification, with updated enforcement details:
Unacceptable Risk (prohibited)
High Risk (strictly regulated)
Limited Risk (transparency requirements)
Minimal Risk (largely unregulated)
The Act imposes strict requirements on high-risk AI systems, including robust risk management, high-quality datasets, transparency, human oversight, and cybersecurity measures. It establishes hefty penalties for non-compliance, up to €35 million or 7% of global annual turnover, whichever is higher. I don’t care who you are – that would sting. To put that in proportion – if you were Google (nice!), you had ~$305 billion turnover in 2023. 7% of that would be ~$21 billion.
That’s not the sort of money that you can just shrug off. Sure, it’d take like a hundred years and a million billable lawyer hours to collect on it… (Although, Google has had about $8B in fines levied against it by the EU – Google Shopping, AdSense, and Android – of which about half has been upheld… So, maybe not the best example. But if you were some tech company other than Google, say, yours (still nice!), I still assert that those fines would sting.)
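To make the “whichever is higher” mechanics concrete, here's a toy Python sketch of the fine calculation – the turnover figures are purely illustrative, not anyone's actual numbers:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """The Act caps fines at whichever is HIGHER: a fixed amount,
    or a percentage of global annual turnover."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

# Unacceptable-risk tier: up to €35M or 7% of global annual turnover.
giant = max_fine(300e9, 35e6, 0.07)  # percentage dominates for a Google-sized firm (~€21B)
sme = max_fine(50e6, 35e6, 0.07)     # the fixed €35M cap dominates for a small company
```

Note the asymmetry this creates: for a company with €50M turnover, 7% would only be €3.5M, so the €35M fixed cap is the operative ceiling – proportionally a much bigger hit than for the giants.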
The EU has stumbled a bit off the starting line when it comes to AI – much of this act concerns incentives for EU companies, particularly around Generative AI. But! Since the compliance side potentially touches everyone who is using a GenAI tool in their business, let’s start with that – as I said, the penalties aren’t insignificant.
EU Risk Levels
Risk Level | Compliance Deadline | Penalties |
---|---|---|
Unacceptable Risk (Prohibited) | February 2, 2025 | Up to €35M or 7% of global annual turnover |
High Risk | August 2, 2026 | Up to €15M or 3% of global annual turnover |
Limited Risk | August 2, 2026 | Up to €7.5M or 1% of global annual turnover |
Minimal Risk | August 2, 2026 | Subject to general business regulations |
Verticals
Here’s a “big picture” verticals chart – this will at least give you a ‘finger in the air’ look at how much any given vertical is affected by the EU AI Act, and the timeline for necessary action.
Vertical | Risk Level | Timeline |
---|---|---|
All Verticals | Varies | Feb 2, 2025 |
All Verticals | Varies | Aug 2, 2025 |
Healthcare | High | Aug 2, 2026 |
Financial Services | High | Aug 2, 2026 |
Human Resources | High | Aug 2, 2026 |
Education | High | Aug 2, 2026 |
Law Enforcement | High | Aug 2, 2026 |
Transportation | High | Aug 2, 2026 |
Manufacturing | Medium to High | Aug 2, 2026/2027 |
Retail | Medium | Aug 2, 2027 |
Marketing | Medium | Aug 2, 2027 |
Agriculture | Medium | Aug 2, 2027 |
Entertainment | Low to Medium | Varies |
General Software | Low to High (depends on use) | Varies |
Updated “General AI” Provisions
Here’s a link to the 2nd draft of the “General Purpose AI Provisions” – have a read. These are the provisions specific to those who are building “general purpose” AI models. Now, you might think that this is functionally irrelevant for you, as you aren’t the one collecting 6.6 billion US$ in the largest funding round ever to build the next biggest greatest AI. And you are completely okay with that.
However, while these provisions are largely written for the providers of these models, there are things in there that will affect you – and things you can, and should, expect from your vendors if you’re consuming a general-purpose AI API.
(Spoiler: Point Claude or ChatGPT or whatever you use to this document, tell it about your business and ask it “what’s in here that’s relevant for me?”)
Now, if you are at all in the business of providing models – even if it is peripheral to your business (like a social media provider allowing people to train a pos/neg/neutral sentiment bot), you might want to keep an eye on this. It’s not terribly hard to track.
I find two things to be interesting, and one of them to be really relevant – we’ll talk about the other in a minute.
The first is the requirement for providers of General Purpose AI Models to disclose certain things in the name of transparency. I think this is fascinating and will probably have to wind its way through the courts – do you know what went into GPT-4o? Neither do I, and OpenAI really likes it that way. My suspicion is that there will be lip-service, exactly-to-the-letter compliance that actually ends up telling us almost nothing about what’s under the hood.
Except if you are way up high in the government, or way under the ground in some secret bunker with gigantic computing power, with people who go by first names and only say “hello” when they answer their work phones.
I’ve had the pleasure of working with some really, really smart people in some crazy data centers. I never got used to being watched all the time.
Transparency Requirements
Mandatory disclosure of AI-generated content (like what you’re reading!)
Documentation of training data sources
Clear labeling of deepfakes and AI-modified content
Incident reporting for serious issues
Smoothing the Way
The EU AI Act does not provide for any direct funding measures. It points to a few programs, like the Digital Europe Programme and Horizon Europe, that can potentially provide funding. However, there are a lot of other things that can be done to smooth out the road and allow innovation.
Probably the best way to think of this is to realize that a lot of tiny organizations have big ideas – and they have trouble cleaning data from multiple sources into a coherent specification. The Act aims at this problem, along with other data governance problems.
A big concept the Act introduces is the “AI Regulatory Sandbox” – basically a place where organizations (with priority access for SMEs!) can run their models in a controlled, lower-risk environment to make sure they are compliant before deploying them to the public.
Using AI vs Developing AI
The last major thing I’d like to bring out is this other set of priorities that you should be aware of. Whether you are using AI or developing “General AI” that other companies will be using – check the timeline below for your requirements.
The first part of the table is the timeline; the second part lists the broad-brush-stroke requirements.
Key Differences:
USERS primarily need to focus on proper usage, oversight, and verification
DEVELOPERS need to meet more comprehensive requirements including documentation, conformity assessments, and technical standards
DEVELOPERS have access to special support measures like:
Simplified technical documentation
Priority access to regulatory sandboxes
Reduced conformity assessment fees
Dedicated guidance channels
Medium-Sized Business Priority Timeline
Timeline | SMEs USING AI | SMEs DEVELOPING AI |
---|---|---|
Immediate Priorities (Q1-Q2 2025) | | |
Medium-Term Actions (Q3 2025-Q3 2026) | | |
Documentation Requirements | | |
Training Requirements | | |
Compliance Monitoring | | |
Caution and Declarations
While this content is intended for businesspeople at medium-sized enterprises, it is probably useful for more than that.
In the spirit of the EU and California regulations, I declare that there is a substantial amount of GenAI content here. (I mean, that’s what I’m an expert on – there better be, right?)
That content is generated from official pages current as of Jan 18, 2025 – so this isn’t just random GPT-4o training content. I will update periodically.
I am so totally not a lawyer.
Please refer to the original sites – they are, in general, written surprisingly well with excellent clarity. Color me impressed.