The NIST AI RMF Playbook is designed to inform AI actors and make the AI RMF more usable. The AI RMF Playbook provides actionable suggestions to help:
- produce or evaluate trustworthy AI systems
- cultivate a responsible AI environment where risk and impact are taken into account
- increase organizational capacity for comprehensive socio-technical approaches to the design, development, deployment, and evaluation of AI technology
The NIST AI RMF Playbook is a community resource. Its success depends upon regular input and feedback. Interested parties are invited to provide input or contribute to the Playbook. Information about the kinds of input to contribute, along with criteria for NIST to include this information in the Playbook, is provided in this list of FAQs.
The NIST AI RMF is a consensus-based resource intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI systems. Currently in draft, the AI RMF describes why a risk management framework for AI is important, and explains the motivation for using the Framework, its scope, audience, and the framing of AI risk and trustworthiness. The draft AI RMF Core provides outcomes that promote and enable dialogue, understanding, and activities for AI risk management.
The NIST AI RMF Playbook is intended to serve as a companion to the Framework and part of a broader knowledge base. It enables users to navigate the AI RMF, and contains actionable suggestions for achieving the outcomes described in Tables 1-4 of the AI RMF Core. While the AI RMF will be revised from time to time, the Playbook will be more dynamic and provide a venue for the AI community to build out suggested practices and learn from others.
Both the AI RMF and the NIST AI RMF Playbook will be housed in a forthcoming NIST Trustworthy and Responsible AI Resource Center, along with other interactive materials, information, and resources for the community to utilize.
The NIST AI RMF Playbook can help organizations cultivate an environment focused on AI risks, and transform how they approach the development, deployment, and use of trustworthy AI technology. Those interested in using the AI RMF functions to enhance their risk management posture can utilize suggested actions from the Playbook to fit their interests and needs. Anyone is free to repurpose portions of the Playbook to create their own internal resources or technical guidance.
The NIST AI RMF Playbook is not a one-size-fits-all resource: it is neither a checklist nor an ordered series of steps, and users are not expected to review or implement all of its suggestions. Rather, it is a companion resource with supplementary information related to Tables 1-4 of the AI RMF, which correspond to the Govern, Map, Measure, and Manage functions and their accompanying categories and subcategories. The suggestions are provided to make the AI RMF more actionable in the pursuit of delivering trustworthy and responsible AI systems.
Material in the NIST AI RMF Playbook is meant to stand alone within a given function-category combination (e.g., GOVERN-2 or MAP-1). Users may therefore find material repeated across categories, and content from different categories may overlap. This is by design, so users can avoid having to move from category to category to find the relevant content of interest. Users may also notice similar suggested actions across the Playbook, though the information may differ in level of specificity depending on context. When reviewing the Playbook, users should note that:
- Suggested NIST AI RMF Playbook actions are not intended to be comprehensive, but instead provide foundational perspectives on trustworthy and responsible AI concepts and practices to date. To remain non-prescriptive, suggestions are specific but not too granular.
- Transparency guidance can be used by organizations to document their AI risk management activities.
- Suggested references for additional reading are intended to serve as a sampling from the available literature on the given topic or subtopic area.
The AI RMF will be published in January 2023 after additional public input, along with the first release of the NIST AI RMF Playbook covering all four AI RMF functions (Govern, Map, Measure, Manage). Since the NIST AI RMF Playbook is intended to be a dynamic source of online information, there will not be a “final version.” Interested parties can submit contributions of guidance materials and feedback on a regular basis.
NIST welcomes suggestions about including references to existing resources or reviewing new resources designed specifically to help users of the AI RMF. Comments on the Playbook may be submitted at any time and will be reviewed and integrated on a semi-annual basis. NIST is requesting a first round of comments by September 29, 2022. Comments also will be welcomed during discussions at a third AI RMF workshop on October 18-19, 2022, and beyond.
Send an email to AIframework@nist.gov.
Criteria for inclusion of contributions in the NIST AI RMF Playbook: In order to be considered by NIST for inclusion in the Playbook, a resource must be publicly available on the Internet. NIST welcomes free resources from for-profit entities. Pay-for resources from non-profit entities also meet the basic criteria for inclusion. If a resource meets these criteria, a description of the resource should be sent to AIframework@nist.gov.
NIST may include commercial entities, equipment, or materials in its guidance in order to support Framework understanding and use. Such identification does not imply recommendation or endorsement by NIST, nor that the entities, materials, or equipment are necessarily the best available for the purpose.
NIST will regularly engage with stakeholders on the AI RMF and the NIST AI RMF Playbook through a variety of means. Users’ feedback about their experiences will play a major role in modifications and improvements to the Framework and Playbook. Feedback can be provided formally or informally. For more information see the