Appendix B: How AI Risks Differ from Traditional Software Risks
As with traditional software, risks from AI-based technology can extend beyond an enterprise, span organizations, and lead to societal impacts. AI systems also bring a set of risks that are not comprehensively addressed by current risk frameworks and approaches. Some AI system features that present risks can also be beneficial. For example, pre-trained models and transfer learning can advance research and increase accuracy and resilience when compared to other models and approaches. Identifying contextual factors in the map function will assist AI actors in determining the level of risk and potential management efforts.
Compared to traditional software, AI-specific risks that are new or increased include the following:
- The data used for building an AI system may not be a true or appropriate representation of the context or intended use of the AI system, and the ground truth may either not exist or not be available. Additionally, harmful bias and other data quality issues can affect AI system trustworthiness, which could lead to negative impacts. (A minimal bias-screening sketch follows this list.)
- AI system dependence on data for training tasks, combined with the increased volume and complexity typically associated with such data.
- Intentional or unintentional changes during training may fundamentally alter AI system performance.
- Datasets used to train AI systems may become detached from their original and intended context or may become stale or outdated relative to deployment context.
- AI system scale and complexity (many systems contain billions or even trillions of decision points) housed within more traditional software applications.
- Use of pre-trained models that can advance research and improve performance can also increase levels of statistical uncertainty and cause issues with bias management, scientific validity, and reproducibility.
- Higher degree of difficulty in predicting failure modes for emergent properties of large-scale pre-trained models.
- Privacy risk due to enhanced data aggregation capability for AI systems.
- AI systems may require more frequent maintenance and triggers for conducting corrective maintenance due to data, model, or concept drift. (A minimal drift-check sketch also follows this list.)
- Increased opacity and concerns about reproducibility.
- Underdeveloped software testing standards and inability to document AI-based practices to the standard expected of traditionally engineered software for all but the simplest of cases.
- Difficulty in performing regular AI-based software testing, or determining what to test, since AI systems are not subject to the same controls as traditional code development.
- Computational costs for developing AI systems and their impact on the environment and planet.
- Inability to predict or detect the side effects of AI-based systems beyond statistical measures.
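For illustration only, the following minimal sketch shows one simple screen for harmful bias in model outcomes: the disparate impact ratio, the selection rate for each group divided by the selection rate of the most-favored group. The groups, rates, and 0.8 review threshold (echoing the common four-fifths rule of thumb) are illustrative assumptions, not requirements of the AI RMF.

```python
# Minimal sketch (illustrative, not prescribed by the AI RMF): screening
# model outcomes for one simple form of harmful bias, the disparate
# impact ratio. Groups, rates, and the 0.8 threshold (the common
# four-fifths rule of thumb) are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model decisions (1 = favorable outcome) for two groups.
outcomes = {
    "group_a": rng.binomial(1, 0.60, 1_000),  # ~60% favorable
    "group_b": rng.binomial(1, 0.42, 1_000),  # ~42% favorable
}

rates = {group: y.mean() for group, y in outcomes.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate  # selection rate vs. the most-favored group
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection_rate={rate:.2f} ratio={ratio:.2f} [{flag}]")
```

In practice, which outcomes, groups, and thresholds merit such screening is a contextual determination, of the kind made in the map and measure functions.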
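The drift item above can likewise be made concrete with a minimal check that compares a training-time reference sample against recent production inputs. The two-sample Kolmogorov-Smirnov test, the synthetic features, and the alarm threshold below are illustrative assumptions; real deployments would select tests and thresholds in context.

```python
# Minimal sketch (illustrative assumptions throughout): comparing a
# training-time reference sample against recent production inputs with a
# two-sample Kolmogorov-Smirnov test, one possible trigger for the
# corrective maintenance noted in the list above.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference data captured when the model was trained (synthetic).
reference = {
    "age": rng.normal(40, 10, 5_000),
    "income": rng.lognormal(10.0, 0.5, 5_000),
}
# Recent production inputs; "income" has shifted to simulate drift.
production = {
    "age": rng.normal(41, 10, 1_000),
    "income": rng.lognormal(10.4, 0.5, 1_000),
}

ALPHA = 0.01  # illustrative significance level for the drift alarm

for feature in reference:
    result = ks_2samp(reference[feature], production[feature])
    drifted = result.pvalue < ALPHA
    print(f"{feature}: KS={result.statistic:.3f} "
          f"p={result.pvalue:.4f} drift={drifted}")
```

A failed check of this kind is one example of a trigger for the more frequent corrective maintenance that AI systems may require.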
Privacy and cybersecurity risk management considerations and approaches are applicable in the design, development, deployment, evaluation, and use of AI systems. Privacy and cybersecurity risks are also considered as part of broader enterprise risk management, which may incorporate AI risks. As part of the effort to address AI trustworthiness characteristics such as “Secure and Resilient” and “Privacy-Enhanced,” organizations may consider leveraging available standards and guidance that help organizations reduce security and privacy risks, such as, but not limited to, the NIST Cybersecurity Framework, the NIST Privacy Framework, the NIST Risk Management Framework, and the Secure Software Development Framework. These frameworks have some features in common with the AI RMF. Like most risk management approaches, they are outcome-based rather than prescriptive and are often structured around a Core set of functions, categories, and subcategories. While there are significant differences between these frameworks based on the domain addressed, and while AI risk management calls for addressing many other types of risks, frameworks like those mentioned above may inform security and privacy considerations in the map, measure, and manage functions of the AI RMF.
At the same time, guidance available before publication of this AI RMF does not comprehensively address many AI system risks. For example, existing frameworks and guidance are unable to:
- adequately manage the problem of harmful bias in AI systems;
- confront the challenging risks related to generative AI;
- comprehensively address security concerns related to evasion, model extraction, membership inference, availability, or other machine learning attacks (a minimal membership inference sketch follows this list);
- account for the complex attack surface of AI systems or other security abuses enabled by AI systems; and
- consider risks associated with third-party AI technologies, transfer learning, and off-label use where AI systems may be trained for decision-making outside an organization’s security controls or trained in one domain and then “fine-tuned” for another (a transfer learning sketch also follows this list).
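To make the membership inference item concrete, the following minimal sketch implements the common confidence-threshold baseline: an attacker guesses that a record was in the training set when the model is unusually confident about it. The model, synthetic data, and 0.9 threshold are illustrative assumptions.

```python
# Minimal sketch of the confidence-threshold membership inference
# baseline: an attacker guesses that a record was in the training set
# when the model is unusually confident about it. The model, synthetic
# data, and 0.9 threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# A deliberately overfit model leaks more membership signal.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """Model's predicted probability for each record's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

THRESHOLD = 0.9  # illustrative attack threshold

members_flagged = true_label_confidence(model, X_train, y_train) > THRESHOLD
nonmembers_flagged = true_label_confidence(model, X_out, y_out) > THRESHOLD

# A large gap between the two rates indicates membership leakage.
print(f"training records flagged as members: {members_flagged.mean():.1%}")
print(f"held-out records flagged as members: {nonmembers_flagged.mean():.1%}")
```

Because the deliberately overfit model is far more confident on training records than on held-out ones, the attacker’s guesses separate members from non-members; the gap between the two flagged rates is exactly the leakage such security assessments look for.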
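The transfer learning and off-label use item can be illustrated in the same spirit. In the sketch below, a backbone network stands in for a third-party pre-trained model (no real pre-trained weights are downloaded); it is frozen and only a new head is fine-tuned for a different task, so whatever the backbone learned outside the organization’s security controls carries over unexamined.

```python
# Minimal sketch (illustrative): a backbone network stands in for a
# third-party pre-trained model (weights=None here, so no real weights
# are downloaded). The backbone is frozen and only a new head is
# fine-tuned for a different 3-class task, so anything the backbone
# learned outside the organization's controls carries over unexamined.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)  # stand-in for a third-party model
for param in backbone.parameters():
    param.requires_grad = False    # freeze the inherited representation

# Replace the final layer with a new head for the target domain.
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on random stand-in data.
x = torch.randn(8, 3, 224, 224)   # batch of 8 fake images
y = torch.randint(0, 3, (8,))     # fake labels in the new domain
optimizer.zero_grad()
loss = loss_fn(backbone(x), y)
loss.backward()                   # gradients reach only the new head
optimizer.step()
print(f"fine-tune step loss: {loss.item():.3f}")
```

The risk named in the item is visible in the structure of this pattern: the frozen backbone’s training data, provenance, and any embedded bias or tampering sit entirely outside the fine-tuning organization’s controls.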
Both AI and traditional software technologies and systems are subject to rapid innovation. Technology advances should be monitored and, where appropriate, deployed to take advantage of those developments and to work toward a future of AI that is both trustworthy and responsible.