Technical and Policy Documents
This section provides direct links to NIST documents related to the AI RMF (NIST AI-100) and the NIST AI Publication Series, as well as NIST-funded external resources in the area of Trustworthy and Responsible AI. New documents will be added as they are completed.
AI Executive Order Documents
The President’s Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence (14110) issued on October 30, 2023, charges multiple agencies – including NIST – with producing guidelines and taking other actions. NIST solicits and considers comments on many of the documents it produces under the EO. Information about final and draft publications appears below. When NIST seeks comments from the public, submission instructions may be found in each document.
Additional information about NIST’s work under this EO is available.
- NIST AI 100-4: Reducing Risks Posed by Synthetic Content
This publication informs, and is complementary to, a separate report on understanding the provenance and detection of synthetic content that AI EO Section 4.5(a) tasks NIST with providing to the White House. NIST AI 100-4 lays out methods for detecting, authenticating, and labeling synthetic content, including digital watermarking and metadata recording, in which information indicating the origin or history of content such as an image or sound recording is embedded in the content to help verify its authenticity. Each section of the report begins with an overview of an approach and outlines current methods for using it, concluding with areas where NIST experts recommend further research.
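As a concrete illustration of the embedding idea (a minimal sketch, not a method prescribed by NIST AI 100-4), the example below hides a short provenance string in the least-significant bits of an image's red channel. The use of Pillow, the payload layout, and the function names are all assumptions made for this example.

```python
# Illustrative LSB watermarking sketch; Pillow, the length-prefixed payload
# layout, and all names here are assumptions, not part of NIST AI 100-4.
from PIL import Image

def embed_provenance(image_path: str, message: str, out_path: str) -> None:
    """Hide a UTF-8 message in the least-significant bit of each red byte."""
    img = Image.open(image_path).convert("RGB")
    pixels = list(img.getdata())
    data = message.encode("utf-8")
    payload = len(data).to_bytes(4, "big") + data  # length prefix, then text
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for this image")
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)  # overwrite the red-channel LSB
    img.putdata(pixels)
    img.save(out_path, format="PNG")  # lossless format preserves the bits

def read_provenance(image_path: str) -> str:
    """Recover the embedded message from the red-channel LSBs."""
    pixels = list(Image.open(image_path).convert("RGB").getdata())
    bits = [r & 1 for r, _, _ in pixels]
    def to_bytes(seq):
        return bytes(
            sum(bit << (7 - i) for i, bit in enumerate(seq[j:j + 8]))
            for j in range(0, len(seq), 8)
        )
    length = int.from_bytes(to_bytes(bits[:32]), "big")
    return to_bytes(bits[32:32 + 8 * length]).decode("utf-8")
```

A plain LSB mark like this is destroyed by any lossy re-encode or resize; production watermarking schemes are considerably more robust and are typically paired with signed provenance metadata.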
- NIST AI 100-5: A Plan for Global Engagement on AI Standards
Recognizing the importance of technical standards in shaping development and use of Artificial Intelligence (AI), the President’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) calls for “a coordinated effort… to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing” internationally. Specifically, the EO tasks the Secretary of Commerce to “establish a plan for global engagement on promoting and developing AI standards… guided by principles set out in the NIST AI Risk Management Framework and United States Government National Standards Strategy for Critical and Emerging Technology” (NSSCET). This plan, prepared with broad public and private sector input, fulfills the EO’s mandate. The plan will be followed by an implementation plan describing the engagement actions that NIST and other U.S. government agencies will take over the next 180 days.
- NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
This document is a cross-sectoral profile of, and companion resource for, the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden’s Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. The AI RMF, released in January 2023, is intended for voluntary use and to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. Developed with input from a public working group of more than 2,500 members, the guidance centers on a list of 13 risks and more than 400 actions that developers can take to manage them.
- NIST AI 800-1: Managing the Risk of Misuse for Dual-Use Foundation Models
These draft guidelines identify best practices for developers of foundation models to manage the risks that their models will be deliberately misused to cause harm, pursuant to Section 4.1(a)(ii) and Section 4.1(a)(ii)(A) in Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI).
NIST Special Publications
- NIST SP 1270: Toward a Standard for Identifying and Managing Bias in Artificial Intelligence
This document aims to surface the salient issues in the challenging area of AI bias and to provide a first step on the roadmap toward detailed socio-technical guidance for identifying and managing bias.
- NIST SP 800-218A: Secure Software Development Practices for Generative AI and Dual-Use Foundation Models
This document augments the secure software development practices and tasks defined in the Secure Software Development Framework (SSDF) version 1.1 by adding practices, tasks, recommendations, considerations, notes, and informative references that are specific to AI model development throughout the software development life cycle. These additions are documented in the form of an SSDF Community Profile to support Executive Order (EO) 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which tasked NIST with “developing a companion resource to the [SSDF] to incorporate secure development practices for generative AI and for dual-use foundation models.” This Community Profile is intended to be useful to the producers of AI models, the producers of AI systems that use those models, and the acquirers of those AI systems. This Profile should be used in conjunction with NIST Special Publication (SP) 800-218, Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities.
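To make one flavor of control in this space concrete (an illustrative assumption, not a task taken from the Profile), the sketch below verifies a model artifact against a pinned SHA-256 digest before it is loaded, a basic supply-chain integrity check; the file name and digest are hypothetical placeholders.

```python
# Hypothetical integrity check for a model artifact prior to loading; the
# pinned digest and file name are placeholders, not values from SP 800-218A.
import hashlib
import sys

PINNED_SHA256 = "0" * 64  # placeholder: record the real digest at release time

def verify_model(path: str, expected: str = PINNED_SHA256) -> None:
    """Refuse to proceed unless the artifact matches its pinned digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected:
        sys.exit(f"refusing to load {path}: digest mismatch")

verify_model("model.safetensors")  # load the model only after this passes
```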
NIST Interagency or Internal Reports (NISTIR)
- NIST AI 100-2: A Taxonomy and Terminology of Adversarial Machine Learning
This document develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). Arranged in an alphabetical glossary, the terminology covers key terms associated with the security of the machine learning (ML) components of an AI system.
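For readers new to the field, the sketch below shows one attack class such taxonomies cover: an evasion attack via the fast gradient sign method, x' = x + eps * sign(grad_x L). The toy linear model, data, and epsilon are assumptions made for illustration, not content from NIST AI 100-2.

```python
# Toy evasion attack (fast gradient sign method) against a linear softmax
# classifier; the model, data, and epsilon are illustrative assumptions.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(W, x, y, eps=0.1):
    """Perturb x to raise the cross-entropy loss for true label y."""
    p = softmax(W.T @ x)                       # predicted class probabilities
    grad_x = W @ (p - np.eye(W.shape[1])[y])   # analytic d(loss)/dx
    return x + eps * np.sign(grad_x)           # one signed gradient step

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))    # 4 input features, 3 classes
x = rng.normal(size=4)
x_adv = fgsm(W, x, y=1)        # adversarial variant of x
```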
- NIST AI 100-3: The Language of Trustworthy AI: An In-Depth Glossary of Terms
- NISTIR 8312: Four Principles of Explainable Artificial Intelligence
The authors introduce four principles for explainable artificial intelligence (AI) that comprise fundamental properties of explainable AI systems.
- NISTIR 8367: Psychological Foundations of Explainability and Interpretability of Artificial Intelligence
In this paper, the author makes the case that interpretability and explainability are distinct requirements for machine learning systems, and reviews the experimental psychology literature on interpretation and comprehension.
NIST-Funded External Resources
- Artificial Intelligence and the Courts: Materials for Judges (AAAS Center for Scientific Responsibility and Justice)