Technical Reports

This section provides direct links to NIST documents related to the AI RMF (NIST AI-100) and the NIST AI Publication Series, as well as NIST-funded external resources in the area of Trustworthy and Responsible AI. New documents will be added as they are completed.

NIST AI Technical Documents

  • NIST AI 100-4: Reducing Risks Posed by Synthetic Content

    NIST AI 100-4 lays out methods for detecting, authenticating, and labeling synthetic content, including digital watermarking and metadata recording, in which information indicating the origin or history of content such as an image or sound recording is embedded in the content to assist in verifying its authenticity. Each section of the report begins with an overview of an approach and outlines current methods for using it, concluding with areas where NIST experts recommend further research.

    • Draft
    • Comment period has closed
  • NIST AI 100-5: A Plan for Global Engagement on AI Standards

    Recognizing the importance of technical standards in shaping the development and use of Artificial Intelligence (AI), this document establishes a plan for global engagement on promoting and developing AI standards, guided by principles set out in the NIST AI Risk Management Framework and the United States Government National Standards Strategy for Critical and Emerging Technology. The plan was prepared with broad public and private sector input.

    • Updated August 5, 2024
    • Final
    • Comment period has closed
  • NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

    This document is a cross-sectoral profile of, and companion resource for, the AI Risk Management Framework (AI RMF 1.0) for Generative AI. The AI RMF was released in January 2023 and is intended for voluntary use and to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. Developed with input from a public working group of more than 2,500 members, the guidance centers on a list of 13 risks and more than 400 actions that developers can take to manage them.

    • Final
    • Comment period has closed
  • NIST AI 800-1: Managing the Risk of Misuse for Dual-Use Foundation Models

    These draft guidelines identify best practices for developers of foundation models to manage the risks that their models will be deliberately misused to cause harm.

    • Draft
    • Comment period has closed

NIST Special Publications

NIST Interagency or Internal Reports (NISTIR)

NIST-funded external resources