The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) provides AI organizations with a guiding structure to operate within and outcomes to aspire to, tailored to their specific contexts, use cases, and skill sets. The rights-affirming framework operationalizes trustworthy AI within a culture of responsible AI practice and use.
Key activities for advancing the AI RMF are listed in the AI RMF Roadmap. These activities could be carried out by organizations independently or in collaboration with NIST, depending on available resources. The work described in the Roadmap is intended to fill gaps in knowledge, practice, or guidance in pursuit of trustworthy and responsible AI.
Characteristics of a Trustworthy AI System
The AI RMF describes a trustworthy AI system as one that is valid and reliable; safe; secure and resilient; privacy-enhancing; explainable and interpretable; accountable and transparent; and fair, with harmful bias managed.