Why a Voluntary Framework Matters
The NIST AI Risk Management Framework (AI RMF), released in January 2023, is a voluntary framework. Nobody is going to fine you for not using it. So why should a SaaS company care?
Three reasons. First, voluntary today doesn’t mean voluntary tomorrow. NIST frameworks have a track record of becoming the basis for regulatory requirements. The NIST Cybersecurity Framework is now referenced in federal contracting requirements and state-level regulations. The AI RMF is on the same trajectory.
Second, your customers care. Enterprise buyers, especially in financial services, healthcare, and government, are increasingly asking how vendors manage AI risk. Having a structured approach based on a recognized framework is a meaningful differentiator during procurement.
Third, it actually works. Unlike compliance checklists that gather dust, the AI RMF provides a practical structure for identifying and managing AI risks before they become incidents, lawsuits, or front-page news.
The Four Core Functions
The AI RMF is organized around four core functions that form a continuous cycle. Think of them as the AI equivalent of the NIST Cybersecurity Framework’s Identify-Protect-Detect-Respond-Recover model.
Govern
Governance is the foundation. This function establishes the organizational structures, policies, and culture needed to manage AI risk effectively. It’s not a one-time setup; it’s the ongoing commitment that makes the other three functions work.
In practice, Govern means defining who in your organization is responsible for AI risk decisions, establishing policies for AI development and deployment, creating accountability structures, and fostering a culture where teams feel empowered to raise AI risk concerns without fear of slowing down product launches.
For SaaS companies, this often starts with a cross-functional AI governance committee that includes engineering, product, legal, and compliance stakeholders.
Map
Map is about understanding context. Before you can manage AI risks, you need to understand what your AI systems do, who they affect, and what could go wrong. This function covers identifying and documenting AI systems across your organization, understanding the intended and potential unintended uses of each system, mapping stakeholders who are affected by your AI, and categorizing risks based on the specific context of deployment.
This is where many SaaS companies discover they have more AI exposure than they thought. That ML model powering recommendations, the NLP pipeline classifying support tickets, the third-party AI service embedded in your analytics: they all need to be mapped.
Measure
Measure is where you quantify and evaluate the risks you’ve mapped. This function involves defining metrics and thresholds for AI risk, testing and evaluating AI systems against those metrics, assessing bias, fairness, accuracy, and reliability, and tracking risks over time as models evolve and data shifts.
For SaaS companies, Measure often means implementing monitoring for model performance degradation, running regular bias audits on decision-making systems, and establishing clear thresholds that trigger human review or model retraining.
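As a minimal sketch of that kind of threshold-based monitoring: the class below tracks rolling accuracy over a fixed window and flags when it drops below a review threshold. The window size and threshold values are placeholders; in practice they come from the metrics you defined in the Measure function.

```python
from collections import deque

# Illustrative performance monitor: triggers human review when rolling
# accuracy falls below a threshold. Window and threshold are assumptions.
class ModelMonitor:
    def __init__(self, window: int = 100, accuracy_threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)  # True if prediction was correct
        self.accuracy_threshold = accuracy_threshold

    def record(self, prediction, actual) -> None:
        self.outcomes.append(prediction == actual)

    def rolling_accuracy(self) -> float:
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self) -> bool:
        # Only alert once the window is full enough to be meaningful.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.rolling_accuracy() < self.accuracy_threshold)

monitor = ModelMonitor(window=10, accuracy_threshold=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 70% rolling accuracy
    monitor.record(pred, actual)
```

The design point is the explicit, pre-agreed threshold: when `needs_review()` fires, the response (human review, retraining, rollback) should already be defined, which is exactly the hand-off into the Manage function.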
Manage
Manage is about taking action. Based on what you’ve measured, this function covers prioritizing risks for treatment, implementing controls and mitigations, allocating resources to the highest-priority risks, and establishing response plans for when AI systems behave unexpectedly.
This is where governance meets operations. Manage ensures that the risks you’ve identified and measured actually get addressed, not just documented.
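One common way to operationalize that prioritization is a simple likelihood-times-impact score over the risks you mapped. The 1-5 scales and example risks below are assumptions for illustration, not an AI RMF requirement; many teams use different scales or qualitative tiers.

```python
# Hypothetical mapped risks, scored on illustrative 1-5 scales.
risks = [
    {"system": "recommendations-model", "risk": "feedback loop",
     "likelihood": 3, "impact": 2},
    {"system": "ticket-nlp-pipeline", "risk": "misroutes urgent tickets",
     "likelihood": 2, "impact": 5},
    {"system": "credit-scoring-model", "risk": "biased denials",
     "likelihood": 2, "impact": 5},
]

def score(risk: dict) -> int:
    """Simple likelihood x impact risk score."""
    return risk["likelihood"] * risk["impact"]

# Treat the highest-scoring risks first.
treatment_queue = sorted(risks, key=score, reverse=True)
```

Note that scoring alone doesn't satisfy Manage; the queue only matters if each entry gets an owner, a mitigation, and a deadline.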
How SaaS Companies Use It in Practice
The AI RMF is intentionally flexible. It doesn’t prescribe specific controls or technologies. Here’s how we see SaaS companies applying it:
Start with Govern and Map. Most companies begin by establishing basic governance structures and inventorying their AI systems. This foundation doesn’t require significant technical investment, just organizational commitment and documentation.
Prioritize by risk. You don’t need to implement comprehensive Measure and Manage activities for every AI system simultaneously. Prioritize the systems with the highest potential impact: those that affect people’s livelihoods, financial access, health, or rights.
Integrate with existing processes. The AI RMF works best when embedded in your existing product development and risk management processes, not bolted on as a separate compliance workstream. Incorporate AI risk assessment into your product review process, add AI-specific considerations to your change management workflow, and include AI monitoring in your existing observability stack.
Use the Playbook. NIST published a companion AI RMF Playbook with suggested actions for each subcategory. It’s the closest thing to a practical implementation guide and is worth reading alongside the framework itself.
Relationship to ISO 42001 and the EU AI Act
The NIST AI RMF, ISO 42001, and the EU AI Act are three different instruments addressing the same fundamental challenge: how do you ensure AI systems are trustworthy and responsibly managed?
NIST AI RMF provides a risk-based framework for identifying and managing AI risks. It’s flexible, voluntary, and US-focused, though internationally referenced.
ISO 42001 provides a certifiable management system for AI governance. It’s more prescriptive than the AI RMF and follows the Plan-Do-Check-Act cycle familiar from other ISO standards.
EU AI Act is binding regulation with specific prohibitions, classifications, and conformity requirements.
These three are complementary, not competing. In practice, we often see companies use the NIST AI RMF as their conceptual foundation for thinking about AI risk, implement ISO 42001 as the management system for operationalizing governance, and map their ISO 42001 controls to EU AI Act requirements to demonstrate regulatory compliance.
If you’re starting from scratch, the NIST AI RMF is often the most accessible entry point. Its flexible structure lets you build AI risk management practices incrementally, and those practices translate directly into ISO 42001 and EU AI Act readiness.
Getting Started
You don’t need a massive program to start using the AI RMF. Begin with these steps:
- Read the framework. It’s 42 pages, accessible, and well-written. The Playbook adds practical guidance.
- Inventory your AI. List every AI and ML system your organization builds, deploys, or uses, including third-party services.
- Assign ownership. Identify who is responsible for AI governance decisions. This doesn’t require a new hire; it requires clear accountability.
- Assess your highest-risk systems. Pick your top 2-3 AI systems by potential impact and walk through the Map and Measure functions for those systems.
- Build from there. Expand your program based on what you learn from those initial assessments.
At Concerto, we help SaaS companies build AI governance programs that are practical, proportionate, and aligned with the frameworks that matter. Whether you’re starting with the NIST AI RMF, pursuing ISO 42001 certification, or preparing for the EU AI Act, we can help you build a program that works. Schedule a consultation to get started.
