Organizations that develop or deploy AI systems face a wide range of risks, including legal and regulatory challenges, potential reputational damage, and ethical issues such as bias and transparency concerns. Through proper AI governance, companies can mitigate these risks, ensuring that AI systems are not only fair and accountable but also contribute positively to society. However, even organizations committed to responsible AI often struggle to measure whether they are successfully meeting these goals.
To address this, the IEEE-USA AI Policy Committee has released the “Flexible Maturity Model for AI Governance,” based on the NIST AI Risk Management Framework (RMF). This model helps organizations assess and monitor their progress in responsible AI governance by providing clear guidelines and stages of development.
The Role of NIST in AI Risk Management
The National Institute of Standards and Technology (NIST) created the AI Risk Management Framework, a widely respected document that outlines best practices for AI risk management. The framework offers valuable guidance, but it does not provide detailed steps for organizations to move toward those best practices or to assess how closely they already adhere to them. This can make implementation challenging for companies, and it leaves external stakeholders, such as investors and consumers, with little basis for evaluating how well organizations are governing their AI systems.
The new IEEE-USA Maturity Model fills this gap by helping organizations identify their current stage of responsible AI governance, track their improvements, and set a roadmap for the future.
Understanding the Maturity Model and Its Framework
The IEEE-USA Maturity Model is built around the four pillars of the NIST RMF, which encourage organizations to manage AI risks in a way that fosters trustworthy AI systems:
- Map: Identify the context in which the AI system operates and assess the associated risks.
- Measure: Evaluate and monitor these identified risks.
- Manage: Take prioritized actions based on the severity of the risks.
- Govern: Establish a culture of risk management within the organization.
This structure promotes continuous dialogue and action regarding AI governance, enabling companies to address risks in a structured and methodical way.
A Flexible AI Governance Questionnaire
At the core of the IEEE-USA Maturity Model is a flexible questionnaire based on the NIST framework. This questionnaire includes specific, actionable statements such as “We evaluate and document bias and fairness issues caused by our AI systems,” which organizations can use to assess their progress. By focusing on concrete, verifiable actions, the maturity model avoids vague statements like “our AI systems are fair.”
The questions are categorized according to the phases of the AI lifecycle: planning and design, data collection and model building, and deployment. This allows organizations to focus on relevant topics depending on where their AI systems are in the development cycle.
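To make this structure concrete, here is a minimal sketch of how a questionnaire item could be represented in code. The field names, pillar labels, and lifecycle-phase strings are illustrative assumptions, not taken from the published model; only the example statement comes from the article itself.

```python
from dataclasses import dataclass

# Hypothetical representation of a maturity-model questionnaire item.
# The field layout is an assumption for illustration; the actual
# IEEE-USA questionnaire defines its own statements and categories.
@dataclass(frozen=True)
class QuestionnaireItem:
    statement: str        # a concrete, verifiable action statement
    pillar: str           # "Map", "Measure", "Manage", or "Govern"
    lifecycle_phase: str  # "planning and design",
                          # "data collection and model building",
                          # or "deployment"

items = [
    QuestionnaireItem(
        statement=("We evaluate and document bias and fairness "
                   "issues caused by our AI systems"),
        pillar="Measure",
        lifecycle_phase="deployment",
    ),
]

# Tagging items by lifecycle phase lets an organization focus its
# assessment on the systems it currently has at that stage:
deployment_items = [i for i in items if i.lifecycle_phase == "deployment"]
```

Keeping the phase and pillar as explicit tags on each item is what makes the later aggregation and filtering steps straightforward.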
Scoring and Assessment Criteria
The maturity model uses scoring guidelines that reflect the ideals outlined in the NIST framework, focusing on three key criteria:
- Robustness: activities range from ad hoc efforts to fully integrated processes.
- Coverage: participation in activities ranges from minimal to comprehensive.
- Input Diversity: input ranges from a single isolated team to a diverse set of internal and external stakeholders.
Assessors can evaluate individual statements or broader themes, depending on the granularity they wish to achieve. Each score must be supported by evidence, which can include internal documents like procedure manuals or external reports.
Once individual statements are scored, the results are aggregated to generate an overall assessment of the organization’s AI governance maturity. Companies can also choose to aggregate scores based on the four NIST pillars: Map, Measure, Manage, and Govern.
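A minimal sketch of this aggregation step, assuming for illustration that each statement receives a numeric score on a 1–5 scale and is tagged with its NIST pillar (the scale, the sample scores, and the use of a simple average are all hypothetical; the model itself does not prescribe them):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical scored statements: (pillar, score on an assumed 1-5 scale).
scored_statements = [
    ("Govern", 4), ("Govern", 5),
    ("Map", 3), ("Map", 2),
    ("Measure", 2),
    ("Manage", 3),
]

def aggregate_by_pillar(scores):
    """Average the statement scores within each NIST pillar."""
    by_pillar = defaultdict(list)
    for pillar, score in scores:
        by_pillar[pillar].append(score)
    return {pillar: mean(vals) for pillar, vals in by_pillar.items()}

pillar_scores = aggregate_by_pillar(scored_statements)
overall = mean(score for _, score in scored_statements)

# In this illustrative data, "Govern" (4.5) outscores "Measure" (2.0):
# the gap pattern discussed below, where policies exist on paper but
# are not yet reflected in measurement practice.
```

The per-pillar breakdown, rather than the single overall number, is what surfaces the systemic weaknesses described in the next section.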
Identifying Weaknesses and Areas for Improvement
The aggregation of scores can reveal systemic weaknesses in a company’s AI governance. For instance, a company that scores well in “governance” but poorly in the other pillars may have strong policies on paper but struggle to implement them.
Another way to assess the scores is to categorize them by specific AI governance dimensions like fairness, privacy, transparency, security, and explainability. This approach helps organizations determine if they are focusing too heavily on certain areas while neglecting others. For example, a company might emphasize transparency but fall short in addressing AI bias.
On the Path to Better Decision-Making
By using the maturity model internally, organizations can assess where they stand in terms of responsible AI and identify specific steps for improvement. Regular assessments can help track progress and inform decision-making processes.
The model also provides value to external stakeholders such as investors, consumers, and buyers, who can use it to evaluate a company’s commitment to responsible AI. With the IEEE-USA Maturity Model and NIST AI RMF, organizations are better equipped to implement and track responsible AI governance, improving their AI systems and ensuring they are ethical, accountable, and beneficial to society.