The First Comprehensive AI Law
The EU AI Act entered into force in August 2024, making it the world’s first comprehensive legal framework for artificial intelligence. Unlike voluntary standards or frameworks, this is binding regulation with real enforcement teeth: fines up to 35 million euros or 7% of global annual turnover, whichever is higher.
If you’re a SaaS company thinking “we’re US-based, this doesn’t apply to us,” think again. The EU AI Act has extraterritorial reach. If your AI system’s output is used by people in the EU, or if your system is placed on the EU market, you’re in scope regardless of where your company is headquartered. Sound familiar? It’s the same jurisdictional approach that caught companies off guard with GDPR.
Enforcement Timeline
The regulation is rolling out in phases:
- February 2025: Prohibitions on unacceptable-risk AI systems took effect
- August 2025: Requirements for general-purpose AI models apply
- August 2026: Most obligations for high-risk AI systems become enforceable
- August 2027: Remaining high-risk system requirements (those in Annex I) take full effect
This phased approach gives companies time to prepare, but “time to prepare” has a way of disappearing faster than expected. If you haven’t started assessing your exposure, you’re already behind the curve.
The Risk Classification System
The EU AI Act organizes AI systems into four risk tiers, and your obligations depend entirely on where your system falls:
Unacceptable Risk (Banned)
These AI practices are prohibited outright: social scoring by governments, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), manipulation of vulnerable groups, and emotion recognition in workplaces and schools. If your SaaS product does any of these, stop.
High Risk
This is where most of the regulatory weight lands. AI systems are classified as high-risk if they’re used in areas like biometric identification, critical infrastructure management, education and vocational training (admissions, grading), employment (recruiting, hiring, performance evaluation), access to essential services (credit scoring, insurance pricing), law enforcement, or immigration and border control.
High-risk systems face the heaviest obligations: risk management systems, data governance requirements, technical documentation, transparency and human oversight provisions, accuracy and robustness standards, and conformity assessments before market placement.
Limited Risk
Systems with limited risk face transparency obligations. If people interact directly with your AI (chatbots), you need to disclose that they’re talking to a machine; if your system generates or manipulates content (deepfakes, synthetic media), that content needs to be labeled as AI-generated. Most customer-facing chatbots and AI-generated content tools fall here.
Minimal Risk
AI systems with minimal risk, like spam filters or AI-powered video game mechanics, face no specific obligations under the Act. Most AI applications fall into this category.
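For teams that track systems in code, the four-tier structure can be sketched as a simple ordered enum. The tier names and the example obligations below are paraphrased for illustration, not official identifiers from the Act:

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """EU AI Act risk tiers, ordered so a higher value means stricter obligations."""
    MINIMAL = 0       # e.g. spam filters; no specific obligations
    LIMITED = 1       # e.g. chatbots; transparency obligations
    HIGH = 2          # e.g. hiring tools; full conformity requirements
    UNACCEPTABLE = 3  # e.g. social scoring; prohibited outright

# Rough, non-exhaustive examples of obligations per tier (paraphrased)
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose AI interaction / label AI-generated content"],
    RiskTier.HIGH: [
        "risk management system",
        "technical documentation",
        "human oversight",
        "conformity assessment before market placement",
    ],
    RiskTier.UNACCEPTABLE: ["do not place on the EU market"],
}
```

Using an ordered enum means a comparison like `RiskTier.HIGH > RiskTier.LIMITED` works directly, which is handy later when an ambiguous system should default to the stricter tier.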
Which SaaS Companies Should Pay Attention?
If your product includes any of the following, you likely have obligations under the EU AI Act:
- AI-powered hiring or HR tools. Resume screening, candidate ranking, performance evaluation, or workforce management systems are explicitly classified as high-risk.
- Credit or financial risk assessment. AI that influences lending decisions, insurance pricing, or creditworthiness evaluations falls squarely into the high-risk category.
- Content generation or chatbots. If your product generates text, images, or audio using AI, or deploys conversational AI, you have transparency obligations at minimum.
- Decision-support systems in regulated industries. AI that assists decisions in healthcare, education, or public services.
- General-purpose AI models. If you develop or fine-tune foundation models, the GPAI provisions apply to you.
Even if your AI features seem low-risk, the Act requires you to be able to demonstrate that classification. “We didn’t think about it” isn’t a defense.
Practical Steps for SaaS Companies
1. Inventory Your AI Systems
You can’t assess risk for systems you haven’t identified. Catalog every AI and ML component in your product, including third-party AI services you integrate. Document what each system does, what data it processes, and who it affects.
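One lightweight way to start that catalog is a structured record per component. The fields below are our own suggestion of a reasonable minimum, not a schema mandated by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields)."""
    name: str                    # internal name of the feature or service
    purpose: str                 # what the system does
    data_categories: list[str]   # kinds of data it processes
    affected_parties: list[str]  # whose outcomes its outputs influence
    third_party_providers: list[str] = field(default_factory=list)  # e.g. external model APIs
    risk_tier: str = "unclassified"  # filled in during step 2

# Hypothetical example entry
inventory = [
    AISystemRecord(
        name="resume-ranker",
        purpose="ranks job applicants for recruiters",
        data_categories=["CVs", "application forms"],
        affected_parties=["job candidates"],
        third_party_providers=["external LLM API"],
    ),
]
```

Starting every record at `"unclassified"` makes unassessed systems easy to query for, so nothing silently skips step 2.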
2. Classify Your Risk Level
Map each AI system to the EU AI Act’s risk tiers. Be honest in your assessment. If there’s ambiguity, err on the side of higher classification until you can confirm otherwise.
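A first-pass triage can be expressed as a rule list checked strictest-first, so ambiguity resolves upward by construction. The keyword rules here are a deliberately simplified illustration for internal screening, not a legal test:

```python
# Ordered strictest-first: the first matching rule wins, so a system that
# matches both a high-risk and a limited-risk keyword comes back high-risk,
# implementing "err on the side of higher classification".
RULES = [
    ("unacceptable", {"social scoring", "workplace emotion recognition"}),
    ("high", {"hiring", "credit scoring", "grading", "biometric identification"}),
    ("limited", {"chatbot", "content generation", "deepfake"}),
]

def triage(use_cases: set[str]) -> str:
    """Return a provisional risk tier for a set of declared use cases."""
    for tier, keywords in RULES:
        if use_cases & keywords:  # any overlap triggers the tier
            return tier
    return "minimal"
```

A resume-screening chatbot declared as `{"hiring", "chatbot"}` triages to `"high"`, not `"limited"`; the provisional label should then be confirmed (or escalated) by a human review against Annex III.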
3. Assess Your Supply Chain
If you’re using third-party AI models or services (OpenAI, Anthropic, cloud ML services), understand your obligations as a “deployer” versus a “provider.” The Act assigns different responsibilities depending on your role in the AI value chain.
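The provider/deployer split can be captured as a simple lookup for internal reference. The duties listed are paraphrased, non-exhaustive examples of the Act's high-risk obligations, not legal text:

```python
# Illustrative duties by value-chain role for high-risk systems, paraphrased
# from the Act's separate provider and deployer obligations.
ROLE_DUTIES = {
    "provider": [
        "conformity assessment before market placement",
        "technical documentation and logging capability",
        "post-market monitoring",
    ],
    "deployer": [
        "use the system per the provider's instructions",
        "assign trained human oversight",
        "monitor operation and report serious incidents",
    ],
}

def duties_for(role: str) -> list[str]:
    """Look up example duties; fail fast on unknown roles rather than guess."""
    if role not in ROLE_DUTIES:
        raise ValueError(f"unknown role: {role!r}")
    return ROLE_DUTIES[role]
```

One caveat worth encoding in your process, not just your lookup table: a deployer that substantially modifies a high-risk system or markets it under its own brand can itself become a provider, inheriting the heavier duty set.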
4. Implement Proportionate Governance
For high-risk systems, you’ll need a risk management system, quality management system, technical documentation, human oversight mechanisms, and monitoring processes. For limited-risk systems, focus on transparency requirements.
5. Document Everything
The EU AI Act is documentation-heavy. Technical documentation, conformity assessments, risk assessments, and records of decisions all need to be maintained. Start building these habits now.
How the EU AI Act Relates to ISO 42001
The EU AI Act tells you what you must do. ISO 42001 gives you a management system for doing it consistently and demonstrably.
ISO 42001 certification doesn’t automatically satisfy EU AI Act requirements, but there’s substantial overlap. The AI risk assessment, AI system inventory, impact assessment, and human oversight requirements in ISO 42001 directly support EU AI Act compliance. Companies pursuing both find that ISO 42001 provides the operational backbone for meeting regulatory obligations.
The NIST AI Risk Management Framework offers another complementary lens, particularly for US-based companies looking to align with both voluntary best practices and emerging regulation. Read our guide to the NIST AI RMF for more on how these frameworks fit together.
Don’t Wait for Enforcement
The phased timeline is not an invitation to procrastinate. Companies that waited until GDPR enforcement to start their compliance programs paid the price in rushed implementations, higher costs, and early enforcement actions. The EU AI Act will follow the same pattern.
At Concerto, we help SaaS companies navigate the intersection of AI governance standards and regulatory requirements. Whether you need a risk classification assessment, a gap analysis against the EU AI Act, or a full AI governance program built on ISO 42001, we can help. Schedule a consultation to discuss your specific situation.
