The NIST AI Resource Center (AIRC) was developed to support the operationalization of the NIST AI Risk Management Framework (AI RMF). The AIRC provides structured access to relevant technical documents, resources that assist in the testing, evaluation, verification, and validation (TEVV) of AI, and software tools and guidance.


The AI Risk Management Framework - Key Resources

The AI RMF

  • The AI RMF - Core Framework

    NIST’s voluntary framework for conceptualizing and managing risks associated with AI systems.

  • The AI RMF - Playbook

    Suggested actions and documentation practices to help achieve the outcomes in the AI RMF.

Applying the AI RMF


Technical Reports that Support AI Risk Management

Recent and featured NIST research on AI standards, measurement, terminology, and more.

  • Reducing Risks Posed by Synthetic Content NIST AI 100-4

    This report examines tools and standards to label, detect, and prevent harmful uses of generative AI.

    • Draft
    • Comment period has closed
  • Global Engagement on AI Standards NIST AI 100-5

    This report establishes a global engagement plan to promote AI standards based on NIST risk management principles.

    • Final
    • Comment period has closed
  • Managing the Risk of Misuse for Dual-Use Foundation Models NIST AI 800-1

    These draft guidelines identify best practices for developers of foundation models to manage the risk that their models will be deliberately misused to cause harm.

    • Draft
    • Comment period has closed

Overview of the AI Risk Management Framework


Contact us

If you have questions or comments about the NIST AI Resource Center, trustworthy AI topics, or NIST’s AI trustworthiness activities, email us at AIframework@nist.gov.