4 Effectiveness of the AI RMF

Evaluations of AI RMF effectiveness – including ways to measure bottom-line improvements in the trustworthiness of AI systems – will be part of future NIST activities conducted in conjunction with the AI community.

Organizations and other users of the Framework are encouraged to periodically evaluate whether the AI RMF has improved their ability to manage AI risks, including but not limited to improvements in their policies, processes, practices, implementation plans, indicators, measurements, and expected outcomes. NIST intends to work collaboratively with others to develop metrics, methodologies, and goals for evaluating the AI RMF’s effectiveness, and to broadly share results and supporting information. Framework users are expected to benefit from:

  • enhanced processes for governing, mapping, measuring, and managing AI risk, and clearly documenting outcomes;

  • improved awareness of the relationships and tradeoffs among trustworthiness characteristics, socio-technical approaches, and AI risks;

  • explicit processes for making go/no-go system commissioning and deployment decisions;

  • established policies, processes, practices, and procedures for improving organizational accountability efforts related to AI system risks;

  • enhanced organizational culture that prioritizes the identification and management of AI system risks and potential impacts to individuals, communities, organizations, and society;

  • better information sharing within and across organizations about risks, decision-making processes, responsibilities, common pitfalls, test, evaluation, verification, and validation (TEVV) practices, and approaches for continuous improvement;

  • greater contextual knowledge, leading to increased awareness of downstream risks;

  • strengthened engagement with interested parties and relevant AI actors; and

  • augmented capacity for TEVV of AI systems and associated risks.