Use Cases

The voluntary NIST AI Risk Management Framework was developed through a collaborative process involving industry, civil society, academia, and government stakeholders.

The Framework is designed to equip organizations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster their responsible design, development, and deployment.

While NIST does not validate or endorse any individual organization or its approach to using the AI RMF, below we provide documented use cases of the NIST AI RMF being put into action. NIST encourages industry, civil society, academia, and government stakeholders to submit additional use cases to AIframework@nist.gov.

View the Workday use case

View the City of San José, CA use case and Playbook

View a use case for “Autonomous Vehicle Risk Management Profile for Traffic Sign Recognition”

View the Google DeepMind AI RMF template
Contact us

If you have questions or comments about the Trustworthy and Responsible AI Resource Center or trustworthy AI topics, or would like to submit a use case, send us an email at: AIframework@nist.gov.