February 6, 2026 · Concerto Compliance

The EU AI Act: What SaaS Companies Need to Know


The First Comprehensive AI Law

The EU AI Act entered into force in August 2024, making it the world’s first comprehensive legal framework for artificial intelligence. Unlike voluntary standards or frameworks, this is binding regulation with real enforcement teeth: fines up to 35 million euros or 7% of global annual turnover, whichever is higher.

If you’re a SaaS company thinking “we’re US-based, this doesn’t apply to us,” think again. The EU AI Act has extraterritorial reach. If your AI system’s output is used by people in the EU, or if your system is placed on the EU market, you’re in scope regardless of where your company is headquartered. Sound familiar? It’s the same jurisdictional approach that caught companies off guard with GDPR.

Enforcement Timeline

The regulation is rolling out in phases:

- February 2, 2025: Prohibitions on unacceptable-risk AI practices take effect, along with AI literacy obligations.
- August 2, 2025: Obligations for general-purpose AI (GPAI) model providers and the Act's governance framework apply.
- August 2, 2026: Most remaining provisions apply, including requirements for most high-risk AI systems.
- August 2, 2027: Requirements extend to high-risk AI embedded in regulated products such as medical devices and machinery.

This phased approach gives companies time to prepare, but “time to prepare” has a way of disappearing faster than expected. If you haven’t started assessing your exposure, you’re already behind the curve.

The Risk Classification System

The EU AI Act organizes AI systems into four risk tiers, and your obligations depend entirely on where your system falls:

Unacceptable Risk (Banned)

These AI practices are prohibited outright: social scoring by governments, real-time biometric identification in public spaces (with narrow exceptions), manipulation of vulnerable groups, and emotion recognition in workplaces and schools. If your SaaS product does any of these, stop.

High Risk

This is where most of the regulatory weight lands. AI systems are classified as high-risk if they’re used in areas like biometric identification, critical infrastructure management, education and vocational training (admissions, grading), employment (recruiting, hiring, performance evaluation), access to essential services (credit scoring, insurance pricing), law enforcement, or immigration and border control.

High-risk systems face the heaviest obligations: risk management systems, data governance requirements, technical documentation, transparency and human oversight provisions, accuracy and robustness standards, and conformity assessments before market placement.

Limited Risk

Systems with limited risk face transparency obligations. If your AI system generates or manipulates content (deepfakes, chatbots), you need to disclose that users are interacting with AI. Most customer-facing chatbots and AI-generated content tools fall here.

Minimal Risk

AI systems with minimal risk, like spam filters or AI-powered video game mechanics, face no specific obligations under the Act. Most AI applications fall into this category.

Which SaaS Companies Should Pay Attention?

If your product includes any of the following, you likely have obligations under the EU AI Act:

- Recruiting, hiring, or performance-evaluation features
- Credit scoring, underwriting, or insurance-pricing models
- Admissions, grading, or other educational assessment tools
- Customer-facing chatbots or AI assistants
- AI-generated or AI-manipulated content, including deepfakes
- Third-party AI models or APIs embedded in your product

Even if your AI features seem low-risk, the Act requires you to be able to demonstrate that classification. “We didn’t think about it” isn’t a defense.

Practical Steps for SaaS Companies

1. Inventory Your AI Systems

You can’t assess risk for systems you haven’t identified. Catalog every AI and ML component in your product, including third-party AI services you integrate. Document what each system does, what data it processes, and who it affects.

2. Classify Your Risk Level

Map each AI system to the EU AI Act’s risk tiers. Be honest in your assessment. If there’s ambiguity, err on the side of higher classification until you can confirm otherwise.
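As a sketch of how a first-pass mapping might look, the helper below assigns a tier based on a system's declared use area. The category keywords are drawn from the Act's examples discussed above; the function itself is illustrative triage, not a legal determination:

```python
# Keywords drawn from the high-risk areas listed in the Act (illustrative, not exhaustive)
HIGH_RISK_AREAS = {
    "biometric", "critical infrastructure", "education", "grading",
    "employment", "recruiting", "hiring", "credit scoring",
    "insurance pricing", "law enforcement", "migration",
}
# Use areas that trigger transparency obligations (limited risk)
TRANSPARENCY_AREAS = {"chatbot", "deepfake", "content generation"}

def classify(use_area: str) -> str:
    """Rough first-pass risk tier for an AI system's use area (not legal advice)."""
    area = use_area.lower()
    if any(keyword in area for keyword in HIGH_RISK_AREAS):
        return "high"
    if any(keyword in area for keyword in TRANSPARENCY_AREAS):
        return "limited"
    # When in doubt, the advice above applies: err toward the higher tier
    # until a proper assessment confirms otherwise.
    return "minimal-or-review"
```

A keyword match is only a starting point; borderline systems should go to counsel or a formal assessment before you rely on the classification.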

3. Assess Your Supply Chain

If you’re using third-party AI models or services (OpenAI, Anthropic, cloud ML services), understand your obligations as a “deployer” versus a “provider.” The Act assigns different responsibilities depending on your role in the AI value chain.

4. Implement Proportionate Governance

For high-risk systems, you’ll need a risk management system, quality management system, technical documentation, human oversight mechanisms, and monitoring processes. For limited-risk systems, focus on transparency requirements.

5. Document Everything

The EU AI Act is documentation-heavy. Technical documentation, conformity assessments, risk assessments, and records of decisions all need to be maintained. Start building these habits now.

How the EU AI Act Relates to ISO 42001

The EU AI Act tells you what you must do. ISO 42001 gives you a management system for doing it consistently and demonstrably.

ISO 42001 certification doesn’t automatically satisfy EU AI Act requirements, but there’s substantial overlap. The AI risk assessment, AI system inventory, impact assessment, and human oversight requirements in ISO 42001 directly support EU AI Act compliance. Companies pursuing both find that ISO 42001 provides the operational backbone for meeting regulatory obligations.

The NIST AI Risk Management Framework offers another complementary lens, particularly for US-based companies looking to align with both voluntary best practices and emerging regulation. Read our guide to the NIST AI RMF for more on how these frameworks fit together.

Don’t Wait for Enforcement

The phased timeline is not an invitation to procrastinate. Companies that waited until GDPR enforcement to start their compliance programs paid the price in rushed implementations, higher costs, and early enforcement actions. The EU AI Act will follow the same pattern.

At Concerto, we help SaaS companies navigate the intersection of AI governance standards and regulatory requirements. Whether you need a risk classification assessment, a gap analysis against the EU AI Act, or a full AI governance program built on ISO 42001, we can help. Schedule a consultation to discuss your specific situation.
