EU AI Act

The EU AI Act adopts a risk-based approach, broadly encompassing all forms of AI. The act sorts AI systems into four levels, from unacceptable to minimal risk. The measures contain both hefty penalties for non-compliance and some indirect incentives – as well as pointers to some direct sources of money. They seem to be trying to fire up a lagging industry and innovation machine, all while safeguarding fundamental rights.

Are they trying to close the proverbial barn door after the chickens have bolted? Or are they counting their horses before they hatch – which they never will, because they’re too busy watching the pot?

Turns out you can’t make an AI omelet without (maybe) breaking some of Schrödinger’s eggs.

(Ok, I know that this is a reach, but there is an essential creative tension between the enforcement and the innovation side of things – and it is interesting reading the legislation with this in mind. And you do have to make mistakes and break things if you are going to lead.)

Status: Finalized (Phased Implementation)

Official Website: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html

This is the parliamentary entry as well as the text as adopted. It’s not the clearest, but is the official-est.

Best 3rd Party Website: https://artificialintelligenceact.eu/

That 3rd party site looks really well done – has a nice “what applies to me” tool, seems reasonably non-partisan, or at least non-partisan enough to hide it pretty well.

Timeline:

August 1, 2024: Official entry into force
February 2, 2025: Prohibitions on unacceptable risk AI systems apply
August 2, 2025: Obligations for general-purpose AI models and penalties apply
August 2, 2026: First phase of high-risk AI system regulations apply
August 2, 2027: Second phase of high-risk AI system regulations apply

Key Provisions Relevant to Medium-Sized Enterprises:

The EU AI Act entered into force on August 1, 2024, with a phased implementation approach:

Risk-Based Framework

The Act maintains its four-tier risk classification, with updated enforcement details:

1. Unacceptable Risk (prohibited)
2. High Risk (strictly regulated)
3. Limited Risk (transparency requirements)
4. Minimal Risk (largely unregulated)

The Act imposes strict requirements on high-risk AI systems, including robust risk management, high-quality datasets, transparency, human oversight, and cybersecurity measures. It establishes hefty penalties for non-compliance: up to €35 million or 7% of global annual turnover, whichever is higher. I don’t care who you are – that would sting. To put that in proportion – if you were Google (nice!), you had ~$305 billion turnover in 2023. 7% of that would be ~$21 billion.

That’s not the sort of money that you can just shrug off. Sure, it’d take like a hundred years and a million billable lawyer hours to collect on it… (Although, Google has had about $8B in fines levied against it by the EU – Google Shopping, AdSense, and Android – of which about half has been upheld… So, maybe not the best example. But if you were some tech company other than Google (still nice!), I still assert that those penalties would sting.)
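
If you want to see how that “whichever is higher” cap works out, here’s a minimal sketch in Python – the turnover figure is just the illustrative Google number from above, treated as if it were euros for simplicity:

```python
def max_penalty_eur(global_annual_turnover_eur: float,
                    fixed_cap_eur: float = 35_000_000,
                    turnover_pct: float = 0.07) -> float:
    """Unacceptable-risk tier: up to €35M or 7% of global annual
    turnover, whichever is HIGHER."""
    return max(fixed_cap_eur, turnover_pct * global_annual_turnover_eur)

# Illustrative: ~$305B turnover treated as euros for simplicity.
print(f"€{max_penalty_eur(305e9):,.0f}")  # ≈ €21,350,000,000
```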

The EU has stumbled a bit off the starting line when it comes to AI – much of this act concerns incentives for EU companies, particularly around Generative AI. But! Since the compliance side potentially touches everyone who is using a GenAI tool in their business, let’s start with that – as I said, the penalties aren’t insignificant.

EU Risk Levels

| Risk Level | Examples | Requirements | Compliance Deadline | Penalties |
| --- | --- | --- | --- | --- |
| Unacceptable Risk (Prohibited) | Social scoring systems by public authorities; biometric categorization based on sensitive characteristics; “real-time” remote biometric identification in public spaces; emotion recognition in workplace/education; AI manipulation targeting vulnerabilities; predictive policing based on profiling | Complete prohibition; immediate cessation of development/deployment; removal from market | February 2, 2025 | Up to €35M or 7% of global annual turnover |
| High Risk | Critical infrastructure (transport, water, gas); educational/vocational assessment; employment (hiring, promotion, termination); essential private/public services; law enforcement systems; border control systems; administration of justice; democratic processes | Risk management system; data quality management; technical documentation; record keeping; transparency to users; human oversight; accuracy, robustness, and cybersecurity; registration in EU database | August 2, 2026 | Up to €15M or 3% of global annual turnover |
| Limited Risk | Chatbots; GenAI systems (e.g. ChatGPT); deepfake generators; AI-enabled personal assistants; emotion recognition systems; biometric categorization | Transparency obligations; disclosure of AI nature; clear labeling of AI-generated content; notification when interacting with AI; disclosure of training data sources | August 2, 2026 | Up to €7.5M or 1% of global annual turnover |
| Minimal Risk | AI-enabled video games; spam filters; inventory management systems; basic recommendation systems; customer service prioritization; AI in scientific research | Voluntary codes of conduct; basic transparency; compliance with existing regulations | August 2, 2026 | Subject to general business regulations |
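
If you’d like that table in machine-readable form for your own compliance tracking, here’s a minimal sketch – the tier names, dates, and penalty figures come straight from the table above, but the structure itself is just one convenient shape I’m assuming:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskTier:
    name: str
    deadline: date        # when obligations apply
    fixed_cap_eur: float  # fixed penalty ceiling
    turnover_pct: float   # percentage-of-turnover ceiling

# Figures as given in the table above.
TIERS = {
    "unacceptable": RiskTier("Unacceptable Risk (Prohibited)", date(2025, 2, 2), 35e6, 0.07),
    "high": RiskTier("High Risk", date(2026, 8, 2), 15e6, 0.03),
    "limited": RiskTier("Limited Risk", date(2026, 8, 2), 7.5e6, 0.01),
    "minimal": RiskTier("Minimal Risk", date(2026, 8, 2), 0.0, 0.0),  # general business rules apply
}

def days_until(tier_key: str, today: date) -> int:
    """Days remaining before a tier's obligations kick in."""
    return (TIERS[tier_key].deadline - today).days

print(days_until("unacceptable", date(2025, 1, 18)))  # 15
```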

Verticals

Here’s a “big picture” verticals chart – this will at least give you a ‘finger in the air’ look at how much any given vertical is affected by the EU AI Act, and the timeline for necessary action.

| Vertical | Risk Level | Timeline | Key Points |
| --- | --- | --- | --- |
| All Verticals | Varies | Feb 2, 2025 | Prohibitions on unacceptable-risk AI systems apply; affects all AI systems regardless of industry; bans social scoring, manipulative AI, and exploitation of vulnerabilities |
| All Verticals | Varies | Aug 2, 2025 | Obligations for general-purpose AI models apply; penalties for non-compliance come into effect; affects providers of foundation models and generative AI |
| Healthcare | High | Aug 2, 2026 | AI for medical devices and diagnostics; patient data management systems; AI-driven drug discovery; mental health chatbots |
| Financial Services | High | Aug 2, 2026 | Credit scoring systems; fraud detection AI; robo-advisors; AI-driven risk assessment |
| Human Resources | High | Aug 2, 2026 | AI-powered recruitment tools; performance evaluation systems; employee monitoring software |
| Education | High | Aug 2, 2026 | Automated grading systems; personalized learning platforms; student performance prediction tools |
| Law Enforcement | High | Aug 2, 2026 | Predictive policing systems; facial recognition for investigations; AI-driven forensic analysis |
| Transportation | High | Aug 2, 2026 | Autonomous vehicle systems; traffic management AI; predictive maintenance for vehicles |
| Manufacturing | Medium to High | Aug 2, 2026/2027 | Quality control AI; supply chain optimization; predictive maintenance; industrial robotics |
| Retail | Medium | Aug 2, 2027 | Personalized recommendation engines; inventory management AI; customer service chatbots |
| Marketing | Medium | Aug 2, 2027 | AI-driven ad targeting; customer behavior prediction; content generation AI |
| Agriculture | Medium | Aug 2, 2027 | Crop monitoring and prediction AI; automated farming systems; livestock management AI |
| Entertainment | Low to Medium | Varies | Content recommendation algorithms; AI-generated content; gaming AI |
| General Software | Low to High (depends on use) | Varies | AI development tools; cloud-based AI services; general-purpose AI models |

Updated “General AI” Provisions

Here’s a link to the 2nd draft of the “General Purpose AI Provisions” – have a read. These are the provisions specific to those who are building “general purpose” AI models. Now, you might think this is functionally irrelevant for you, as you aren’t the one collecting US$6.6 billion in the largest funding round ever to build the next biggest, greatest AI. And you are completely okay with that.

However, while these provisions are largely written for the providers of these models, there are things in there that will affect you – and things in there that you can, and should, expect from your vendors if you’re consuming a general-purpose AI API.

(Spoiler: Point Claude or ChatGPT or whatever you use to this document, tell it about your business, and ask it “what’s in here that’s relevant for me?”)
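
If you’d rather script that than paste into a chat window, here’s a minimal sketch using the OpenAI Python SDK – the model name and file path are placeholders, and the same pattern works with any vendor’s chat API:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical inputs: your saved copy of the provisions, plus a blurb about your business.
provisions = open("gpai_provisions_draft2.txt").read()
business = "We're a 200-person logistics firm using a GenAI chatbot for customer support."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{
        "role": "user",
        "content": f"{provisions}\n\nMy business: {business}\n\n"
                   f"What's in here that's relevant for me?",
    }],
)
print(response.choices[0].message.content)
```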

Now, if you are at all in the business of providing models – even if it is peripheral to your business (like a social media provider allowing people to train a pos/neg/neutral sentiment bot) – you might want to keep an eye on this. It’s not terribly hard to track.

I find two things to be interesting, and one of them to be really relevant – we’ll talk about the other in a minute.

The first is the requirement for providers of General Purpose AI Models to disclose certain things in the name of transparency. I think this is fascinating and will probably have to wind its way through the courts – do you know what went into GPT-4o? Neither do I, and OpenAI really likes it that way. My suspicion is that there will be lip-service, “exact to the letter” compliance that actually ends up telling us almost nothing about what’s under the hood.

Except if you are way up high in the government, or way under the ground in some secret bunker with gigantic computing power, with people who go by first names and only say “hello” when they answer their work phones.

I’ve had the pleasure of working with some really, really smart people in some crazy data centers. I never got used to being watched all the time.

Transparency Requirements

• Mandatory disclosure of AI-generated content (like what you’re reading!)

• Documentation of training data sources

• Clear labeling of deepfakes and AI-modified content

• Incident reporting for serious issues
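
As a concrete (if deliberately simplistic) example of what those labeling obligations could look like in practice, here’s a sketch that stamps generated content with a disclosure – the field names and notice wording are my own invention, not anything prescribed by the Act:

```python
from datetime import datetime, timezone

def label_ai_content(text: str, model: str, training_data_note: str) -> dict:
    """Wrap AI-generated text with illustrative disclosure metadata."""
    return {
        "content": text,
        "disclosure": "This content was generated with the assistance of AI.",
        "model": model,
        "training_data_sources": training_data_note,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

labeled = label_ai_content(
    "Quarterly outlook summary...",
    model="gpt-4o",  # placeholder model name
    training_data_note="See the provider's published training data summary.",
)
print(labeled["disclosure"])
```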

    Smoothing the Way

    The EU Ai Act does not provide for any direct funding measures. It points to a few like the Digital Europe Programme and Horizon Europe as programs that can potentially provide funding. However, there’s a lot of other things that can be done to smooth out the road to allow innovation.

    Probably the best way to think of this is to realize that a lot of tiny organizations have big ideas – and they have trouble with cleaning data from multiple sources to a coherent specification – the Act aims at this problem along with other data governance problems.

    A big concept that they introduce is the idea of “AI Regulatory Sandboxes” – basically a place where organizations (prioritized to SME’s!) can take their models that they can run them in a controlled, lower-risk environment to make sure that they are compliant before deploying them into public.

Using AI vs Developing AI

The last major thing I’d like to bring out is this other set of priorities that you should be aware of. Whether you are using AI, or developing “General AI” that other companies will be using – check the timeline below for your requirements.

The first part of the table is the timeline; the second part is the broad-brush requirements. (A sketch of a simple AI-system register follows the table.)

Key Differences:

• USERS primarily need to focus on proper usage, oversight, and verification

• DEVELOPERS need to meet more comprehensive requirements including documentation, conformity assessments, and technical standards

• DEVELOPERS have access to special support measures like:

  • Simplified technical documentation

  • Priority access to regulatory sandboxes

  • Reduced conformity assessment fees

  • Dedicated guidance channels

Medium-Sized Business Priority Timeline

| Timeline | SMEs USING AI | SMEs DEVELOPING AI |
| --- | --- | --- |
| Immediate Priorities (Q1-Q2 2025) | Audit existing AI systems against risk categories; remove or replace prohibited systems by Feb 2025; verify CE markings and conformity; set up oversight for high-risk systems; begin staff training programs | Document all AI systems and risk categories; stop development of prohibited systems; begin GPAI compliance prep; apply for regulatory sandbox access; start conformity assessments |
| Medium-Term Actions (Q3 2025-Q3 2026) | Implement transparency measures (AI interaction disclosures, content labeling, user notifications); complete staff training; document usage procedures; set up monitoring systems | Implement transparency requirements (technical documentation, training data summaries, model capabilities/limitations); complete conformity assessments; set up incident reporting |
| Documentation Requirements | List of AI systems in use; risk assessment records; usage policies and procedures; staff training records; incident reports | Technical documentation; training data summaries; model evaluations; risk assessments; test results; incident reports |
| Training Requirements | AI system usage training; risk recognition; incident reporting procedures; oversight protocols | Technical compliance training; development standards; risk assessment methods; documentation procedures |
| Compliance Monitoring | Regular system audits; usage monitoring; incident tracking; user feedback collection | System performance monitoring; risk assessment updates; incident tracking; model evaluation |
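
And here’s the promised sketch of a simple AI-system register – the “list of AI systems in use” from the documentation row above. The field names are my guess at a useful shape, not a mandated format:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    role: str             # "user" or "developer"
    risk_level: str       # unacceptable / high / limited / minimal
    deadline: str         # applicable compliance date
    oversight_owner: str  # who is accountable internally

# Illustrative entries only.
inventory = [
    AISystemRecord("Support chatbot", "OpenAI API", "user", "limited", "2026-08-02", "Head of CX"),
    AISystemRecord("CV screening tool", "In-house", "developer", "high", "2026-08-02", "HR Director"),
]

with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AISystemRecord)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in inventory)
```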
Caution and Declarations

• While this content is intended for businesspeople at medium-sized enterprises, it is probably useful for more than that.

• In the spirit of the EU and California regulations, I declare that there is a substantial amount of GenAI content here. (I mean, that’s what I’m an expert on – there had better be, right?)

• That content is generated from official pages current as of Jan 18, 2025 – so this isn’t just random GPT-4o training content. I will update periodically.

• I am so totally not a lawyer.

• Please refer to the original sites – they are, in general, written surprisingly well, with excellent clarity. Color me impressed.
