Technical Reports

This section provides links to NIST documents related to the AI RMF (NIST AI 100) and the NIST AI Publication Series, as well as to NIST-funded external resources on Trustworthy and Responsible AI. New documents will be added as they are completed.

NIST AI Technical Documents

  • NIST AI 100-5: A Plan for Global Engagement on AI Standards

    Recognizing the importance of technical standards in shaping the development and use of AI, this document establishes a plan for global engagement on promoting and developing AI standards, guided by principles set out in the NIST AI Risk Management Framework and the United States Government National Standards Strategy for Critical and Emerging Technology. This plan was prepared with broad public and private sector input.

  • NIST AI 100-4: Reducing Risks Posed by Synthetic Content

    NIST AI 100-4 lays out methods for detecting, authenticating, and labeling synthetic content, including digital watermarking and metadata recording, in which information indicating the origin or history of content, such as an image or sound recording, is embedded in the content to help verify its authenticity; a toy sketch of the metadata-recording idea follows below. Each section of the report begins with an overview of an approach and outlines current methods for using it, concluding with areas where NIST experts recommend further research.
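
    To make the metadata-recording idea concrete, here is a minimal Python sketch of the author's own devising, not a scheme from the report: an origin claim is bound to a hash of the content and tagged with an HMAC, so changing either the content or the claim breaks verification. The key, field names, and origin strings are illustrative assumptions.

      import hashlib
      import hmac
      import json

      SECRET_KEY = b"demo-key"  # hypothetical key; real schemes use managed credentials

      def make_provenance_record(content: bytes, origin: str) -> dict:
          """Bind an origin claim to the content's hash, then tag the record."""
          record = {
              "sha256": hashlib.sha256(content).hexdigest(),
              "origin": origin,  # e.g. "generated-by: example-model-v1"
          }
          payload = json.dumps(record, sort_keys=True).encode()
          record["tag"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
          return record

      def verify(content: bytes, record: dict) -> bool:
          """Recompute the hash and tag; any edit to content or claim fails."""
          claimed = {"sha256": record["sha256"], "origin": record["origin"]}
          payload = json.dumps(claimed, sort_keys=True).encode()
          expected = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
          return (hmac.compare_digest(record["tag"], expected)
                  and record["sha256"] == hashlib.sha256(content).hexdigest())

      audio = b"...raw bytes of a sound recording..."
      rec = make_provenance_record(audio, "recorded-by: example-device")
      print(verify(audio, rec))          # True: authentic and unmodified
      print(verify(audio + b"x", rec))   # False: any edit breaks the hash

    Provenance standards deployed in practice rely on public-key signatures rather than a shared secret, so that anyone can verify a record without being able to forge one.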

  • NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

    This document is a cross-sectoral profile of, and companion resource for, the AI Risk Management Framework (AI RMF 1.0) for Generative AI. The AI RMF, released in January 2023, is intended for voluntary use and to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The Generative AI profile was produced with the input of a public working group of more than 2,500 participants. The guidance centers on 13 risks and more than 400 actions that developers can take to manage them.

  • NIST AI 700-1: 2024 NIST GenAI Pilot Evaluation Report — Text-to-Text Evaluation Overview and Results

    The 2024 NIST Generative AI (GenAI) Pilot Study focuses on evaluating text-to-text (T2T) generation and discrimination tasks to assess the capabilities and limitations of generative AI models and AI detectors. The study aims to measure how effectively AI-generated text mimics human writing and how well AI-based discriminators distinguish between human- and AI-generated content; one standard way to score such a discriminator is sketched below.
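
    As a hedged illustration of how such discriminators are commonly scored (this is not the pilot's official metric or data format), the following Python snippet computes the area under the ROC curve from hypothetical detector scores, using the rank-sum formulation:

      def auc(scores_ai, scores_human):
          """Probability the detector ranks an AI-written text above a
          human-written one; 1.0 is a perfect detector, 0.5 is chance."""
          wins = ties = 0
          for a in scores_ai:
              for h in scores_human:
                  if a > h:
                      wins += 1
                  elif a == h:
                      ties += 1
          return (wins + 0.5 * ties) / (len(scores_ai) * len(scores_human))

      # Made-up scores in [0, 1]; higher means "more likely AI-generated".
      ai_scores = [0.91, 0.74, 0.88, 0.65]
      human_scores = [0.32, 0.57, 0.12, 0.70]
      print(f"detector AUC: {auc(ai_scores, human_scores):.2f}")  # 0.94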

  • NIST AI 700-2: 2024 NIST ARIA Pilot Evaluation Report — Assessing Risks and Impacts of AI (ARIA)

    Current approaches to evaluation of artificial intelligence (AI) often do not account for risks and impacts of AI systems in the real world. Launched in May 2024, NIST’s Assessing Risks and Impacts of AI (ARIA) program pairs people with AI applications and studies application behaviors as well as positive and negative impacts on human testers in scenario-based interactions. This new approach to AI evaluation can better estimate real-world risks and impacts of AI systems on humans, enabling organizations to improve the trustworthiness of their AI systems and make more informed decisions when acquiring or deploying AI.

  • NIST AI 800-1: Managing the Risk of Misuse for Dual-Use Foundation Models

    These draft guidelines outline best practices for developers of foundation models to mitigate potential risks associated with model misuse.

NIST Interagency or Internal Reports (NISTIR)

  • NIST AI 100-2: A Taxonomy and Terminology of Adversarial Machine Learning

    This document develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The terminology, arranged in an alphabetical glossary, defines key terms associated with the security of the machine learning (ML) components of an AI system. A toy example of one attack class covered by the taxonomy, evasion, follows below.
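
    As an illustration of the evasion class of attacks (this example is not drawn from the document itself), the following Python snippet applies a fast-gradient-sign perturbation to a hand-rolled linear classifier; all numbers are made up:

      import numpy as np

      w = np.array([1.5, -2.0, 0.5])   # weights of a toy linear classifier
      b = 0.1

      def predict(x):
          return 1 if x @ w + b > 0 else 0

      x = np.array([0.2, -0.4, 0.3])   # clean input, classified as 1
      eps = 0.5                        # attacker's perturbation budget

      # For a linear model the gradient of the score w.r.t. x is just w,
      # so stepping against sign(w) lowers the score as fast as possible.
      x_adv = x - eps * np.sign(w)

      print(predict(x), predict(x_adv))  # 1 0: a small change flips the label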

  • NIST AI 100-3: The Language of Trustworthy AI

    This publication establishes a foundational shared vocabulary to operationalize the NIST AI Risk Management Framework (AI RMF). It documents the development of a glossary of over 500 terms, providing multiple definitions to bridge communication gaps across fields such as computer science, law, and the social sciences. This resource supports technical accuracy and consistency in the evolving landscape of responsible AI.

    Note: A final glossary release will be published at a later date.

  • NISTIR 8312: Four Principles of Explainable Artificial Intelligence

    This paper presents four principles for explainable artificial intelligence (AI) systems.

  • NISTIR 8367: Psychological Foundations of Explainability and Interpretability of Artificial Intelligence

    This paper examines the distinction between interpretability and explainability as requirements for machine learning systems. It reviews the relevant literature in experimental psychology on interpretation and comprehension.

  • U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools

    This 2019 report provides a strategic framework for federal agencies to lead in the development of reliable, robust, and trustworthy AI standards. It emphasizes the importance of public-private partnerships and international engagement to ensure technical standards reflect federal priorities for innovation and public trust. The plan focuses on nine critical areas, including data and knowledge, human interactions, metrics, networking, performance testing and reporting methodology, safety, risk management, and trustworthiness, and guides agencies in making informed decisions that bolster U.S. economic and national security.

NIST-Funded External Resources

  • Artificial Intelligence and the Courts: Materials for Judges

    AAAS Center for Scientific Responsibility and Justice

    This series provides judges with an essential framework for navigating the complexities of AI in the courtroom. It addresses critical issues such as the admissibility of AI-generated evidence, the "black box" problem of algorithmic transparency, and the risks posed by deepfakes and AI training data. By exploring how AI transforms judicial reasoning and case management, the series equips legal professionals with the knowledge to balance technological innovation with scientific responsibility.