EU AI Act | Simplified for everyone

Understand EU AI regulations and how to prepare for them.

In partnership with Vanta

Join the live session: automate compliance & streamline security reviews

Whether you’re starting or scaling your company’s security program, demonstrating top-notch security practices and establishing trust is more important than ever.

Vanta automates compliance for SOC 2, ISO 27001, and more, saving you time and money — while helping you build customer trust.

And, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing Trust Center, all powered by Vanta AI.

Read Time: 6 mins

Guardians,

It’s about time we talk about … actually ‘guarding’ AI 🙏

By that I mean, EU AI regulations.

It’s coming.

By mid-2025, every business in the EU, or serving EU customers, will have the inquisition at their door.

Before that happens, here’s a quick rundown of today’s edition:

  • What is the EU AI Act?

  • Who it applies to

  • Who’s exempted

  • What the risk categories are, and

  • What to do if you’re concerned

Let’s roll.

What is the EU AI Act?

A set of rules for artificial intelligence (AI) in the European Union (EU), created to make sure everyone follows the same guidelines when using AI. It was proposed in April 2021 and officially approved in May 2024.

Under the Act, the EU will form a board to help member countries work together and follow the rules. But this board is just the “Overseer”.

Every EU country will have its own local governing body actually doing the regulating (exactly like NIS2). So, if you’re in the EU, keep an eye out for local news.

Just like GDPR, it also applies to companies outside the EU if they have users in the EU. And given the local-body setup above, if you’re serving multiple EU countries, you gotta keep track of multiple news channels.

Who does the AI Act apply to?

Simply put, it applies to all types of AI in many different sectors.

Exceptions:

  • AI used only for military purposes

  • AI used for national security

  • AI used for research, and

  • AI used for personal/non-professional purposes (you can still make your anime waifus!)

Basically, if your business uses AI anywhere (not just in your product or service line; it includes your employees using AI to be more productive or to quell loneliness), the regulations are for you.

They call it “professional context”.

And like I said above, it covers both EU-based and non-EU companies if they have users in the EU.

💡 Note: There are special rules for generative AI systems, like ChatGPT. More on that in later issues.

Risk categories in the EU AI Act

The heart of the EU AI Act is its risk categories. Here’s a summary:

Unacceptable Risk: 🚫

These AI applications are banned.

Examples include AI that manipulates behavior, real-time biometric identification in public, and social scoring.

High Risk: ⚠️

These AI applications pose significant threats to health, safety, or fundamental rights.

Examples include AI in health, education, recruitment, critical infrastructure, and law enforcement.

They must follow strict quality, transparency, and safety rules and require a "Fundamental Rights Impact Assessment" before use.

General-Purpose AI: 💡

This includes foundation models, like the ones behind ChatGPT.

They must meet transparency requirements, and high-impact models undergo thorough evaluation.

More on this in later issues.

Limited Risk: 🔍

These AI systems have transparency obligations to inform users they are interacting with AI.

Examples include AI for generating/manipulating images, sound, and videos (like deepfakes).

💡 Note: Deepfakes are a contentious point in the Act. It’s likely they’ll be bumped up to high risk soon enough, but that’s just my hunch. We’ll wait and watch.

Minimal Risk: 🎮

These systems are not regulated, but a voluntary code of conduct is suggested.

Examples include AI in video games and spam filters.

Who’s exempted from the EU AI Act

  • Applications in the Minimal Risk category

  • AI systems used for military or national security purposes

  • AI used for pure scientific research and development

  • Real-time algorithmic video surveillance, which is generally banned but allowed for policing purposes in cases of a terrorist threat

  • Social scoring for lawful, specific-purpose evaluations under EU and national law (talk about a loophole 😬)

What to do if you’re concerned

Step 1: Understand the Regulations:

Familiarize yourself with the AI Act’s requirements and how they apply to your AI systems. This newsletter already took care of that, so move to step 2.

Step 2: Assess Your AI Systems:

Determine which risk category your AI systems fall into (Unacceptable, High, General-Purpose, Limited, Minimal).

Need help with deciding? Book a call with me.
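If it helps to see the decision as code, here’s a minimal, illustrative Python sketch of a first-pass triage. The keyword lists and the classify_risk_tier function are my own placeholders for illustration, not anything from the Act itself:

```python
# Illustrative only: a toy first-pass triage of the EU AI Act's risk
# tiers. The keyword lists and function below are my own invention,
# not an official classification tool. Always verify against the Act.

BANNED_USES = {"social scoring", "behavioral manipulation",
               "real-time public biometric identification"}
HIGH_RISK_DOMAINS = {"health", "education", "recruitment",
                     "critical infrastructure", "law enforcement"}
LIMITED_RISK_USES = {"chatbot", "image generation", "deepfake"}

def classify_risk_tier(use_case: str) -> str:
    """Map a free-text use-case description to a rough risk tier."""
    text = use_case.lower()
    if any(term in text for term in BANNED_USES):
        return "Unacceptable: banned outright"
    if any(domain in text for domain in HIGH_RISK_DOMAINS):
        return "High: strict quality, transparency, and safety rules"
    if any(term in text for term in LIMITED_RISK_USES):
        return "Limited: must disclose that users are interacting with AI"
    return "Minimal: unregulated; voluntary code of conduct suggested"

print(classify_risk_tier("AI resume screening for recruitment"))
# -> High: strict quality, transparency, and safety rules
```

Obviously a real assessment needs a lawyer, not a keyword match, but it’s a handy way to structure the question.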

Step 3: Conduct Conformity Assessments:

Perform self-assessments or seek third-party conformity assessments to ensure compliance with the AI Act.

There are a handful of audit and certification firms already operating in this space. However, it’s important to note that none of them (actually, none anywhere) have been designated as “Notified Bodies” under the Act yet. They’re just companies that have been around for a while and have the highest chances of being designated as notified bodies.

I’ll wait and watch, and so should you.

If you’re going to do a self-assessment, use the technical standards developed by the European Standardization Organizations (CEN, CENELEC, and ETSI) for guidance.

💡 Pro Tip: I can help you with that. Just book a call.

Step 4: Stay Updated:

Keep informed about any changes or updates to the AI Act and its implementation timelines.

Long and short: it’s coming by mid-2025.

Step 5: Prepare for Audits:

Be ready for potential audits by notified bodies to verify your conformity assessments (done in Step 3, but do it twice, just to be sure).

Step 6: Enhance Transparency:

Ensure your AI systems provide clear information to users, especially for high-risk and general-purpose AI.

This is the part where using open source LLMs can be a life-saver. That just happens to be my specialty, so hit me up if you need help!
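To make that concrete, here’s a tiny, illustrative Python sketch of what an AI-interaction disclosure could look like in a chatbot. The wording and the wrap_with_ai_disclosure helper are my own placeholders, not language mandated by the Act:

```python
# Illustrative only: prepend an AI-interaction disclosure to chatbot
# output, in the spirit of the Act's transparency obligations for
# limited-risk systems. The disclosure text is a placeholder, not
# legally vetted wording.

AI_DISCLOSURE = ("You are interacting with an AI system. "
                 "Responses are generated automatically.")

def wrap_with_ai_disclosure(response: str) -> str:
    """Return the model's response with a clear AI disclosure on top."""
    return f"{AI_DISCLOSURE}\n\n{response}"

print(wrap_with_ai_disclosure("Here's the summary you asked for."))
```

The point is less the code and more the habit: disclosure should be baked into the output path, not bolted on afterwards.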

Step 7: Monitor and Adjust:

Continuously review and improve your AI systems to maintain compliance and address any new risks or requirements.


That's all for this edition, Guardians! Keep pushing the boundaries of what's possible with AI, but do it responsibly. Until next time, stay curious and keep innovating!

P.S. If you're loving these insights, don't forget to share with your fellow AI enthusiasts. Let's grow this community together!
