Use Cases

The voluntary NIST AI Risk Management Framework was developed through a collaborative process by industry, civil society, academia, and government stakeholders.

The Framework is designed to equip organizations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster their responsible design, development, and deployment.

While NIST does not validate or endorse any individual organization or its approach to using the AI RMF, below we provide documented use cases of the NIST AI RMF being put into action. NIST encourages industry, civil society, academia, and government stakeholders to submit additional use cases to AIframework@nist.gov.

Government

Industry


Contact us

If you have questions or comments about the Trustworthy and Responsible AI Resource Center or trustworthy AI topics, or if you would like to submit a use case, send us an email at: AIframework@nist.gov.