In recent months, artificial intelligence (AI) has increasingly become one of the hottest topics of debate. The rise of free-to-access AI chatbots and image generators has thrust the conversation about AI-generated content, its risks, and its benefits into the spotlight. In fact, the use of AI and AI-generated content is becoming so widespread that, at the IAPP Global Privacy Summit, generative AI expert Nina Schick predicted that 90% of online content will be AI-generated by 2025.
This rapid adoption has, somewhat predictably, had a mixed response. In Italy, the Data Protection Authority (Garante) recently banned ChatGPT over privacy concerns, while in the US, the Center for AI and Digital Policy filed a complaint with the Federal Trade Commission (FTC) over the language model chatbot. But that hasn’t stopped it from attracting over 100 million users in a little over six months. ChatGPT is just one example: software developers all over the world are creating new AI and machine-learning technologies daily that, without regulation, could put individuals’ privacy at risk.
There is little doubt that AI systems are revolutionizing industries and creating unprecedented opportunities for growth. However, with these benefits come risks and challenges that require a robust risk management approach. On January 26, 2023, the National Institute of Standards and Technology (NIST) introduced its AI Risk Management Framework (AI RMF) to help organizations develop AI technologies responsibly and manage the risks these technologies pose to individuals and to society more widely. Keep reading to learn more about the NIST AI RMF and how you can apply this framework to your AI systems.
What is the NIST AI Risk Management Framework?
The AI RMF was developed by NIST with input from other key stakeholders, and in line with related legislative efforts, to give organizations that develop AI systems and technologies a practical, adaptable framework for measuring and protecting against potential harm to individuals and society. The approach outlined in the NIST AI RMF is intended to help organizations mitigate risk, unlock opportunity, and raise the trustworthiness of their AI systems from design through deployment.
NIST divides its framework into two parts. The first focuses on planning and understanding, guiding organizations on how to analyze the risks and benefits of AI and how to define trustworthy AI systems. NIST outlines the following characteristics against which organizations can measure the trustworthiness of their systems:

- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair, with harmful bias managed
The second part provides actionable guidance in what NIST describes as the “core” of the framework. It builds out four central functions – govern, map, measure, and manage – for organizations to work into their development of AI systems. The “govern” function sits at the center as an ongoing effort that aims to create a culture of risk management. The “map” function establishes the context in which risks can be identified and framed. The “measure” function analyzes, assesses, and tracks the identified risks. And the “manage” function prioritizes risks based on their impact so that the appropriate mitigation measures can be applied.
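To make the relationship between the four functions concrete, here is a minimal, illustrative sketch in Python. Every name in it (RiskRecord, map_risks, the 1-to-5 rating scale) is a hypothetical assumption for the example; the NIST AI RMF defines organizational outcomes, not a software API.

```python
from dataclasses import dataclass

@dataclass
class RiskRecord:
    description: str
    likelihood: int  # hypothetical scale: 1 (rare) to 5 (almost certain)
    impact: int      # hypothetical scale: 1 (negligible) to 5 (severe)

def map_risks(context: dict) -> list[RiskRecord]:
    # MAP: identify risks from the system's context of use
    return [RiskRecord(f"{context['use_case']}: {hazard}", likelihood=3, impact=4)
            for hazard in context.get("known_hazards", [])]

def measure(risks: list[RiskRecord]) -> list[tuple[RiskRecord, int]]:
    # MEASURE: score each identified risk (a simple likelihood x impact product)
    return [(r, r.likelihood * r.impact) for r in risks]

def manage(scored: list[tuple[RiskRecord, int]], threshold: int = 10) -> list[RiskRecord]:
    # MANAGE: prioritize risks whose score exceeds the organization's tolerance
    return [r for r, score in scored if score >= threshold]

# GOVERN is not a single function call: it is the surrounding culture of
# policies, roles, and review cadence that keeps this loop running.
context = {"use_case": "resume screening", "known_hazards": ["biased training data"]}
print(manage(measure(map_risks(context))))
```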
How can organizations approach the NIST AI Risk Management Framework?
To ensure organizations develop and deploy AI systems responsibly, OneTrust has created a comprehensive assessment template within the AI Governance solution. The NIST AI RMF assessment template is based on the NIST AI Risk Management Framework (RMF) Playbook, a companion resource to the NIST AI RMF. The template walks through an initial information-gathering step followed by the framework’s four core functions: Govern, Map, Measure, and Manage. Let’s take a look at each area in more detail.
Information Gathering
The first step in managing AI risk is understanding the data and context around your AI systems. In this section, organizations gather crucial information about their AI systems, such as the project ID, project description, and deadline.
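As a rough illustration, an inventory entry for an AI system could be modeled as a simple record. The field names below mirror the examples above and are assumptions for the sketch, not OneTrust’s actual template schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    project_id: str           # unique identifier for the AI project
    project_description: str  # what the system does and who uses it
    deadline: date            # target date for assessment or deployment

record = AISystemRecord(
    project_id="AI-2023-001",  # hypothetical identifier
    project_description="Chatbot for customer support triage",
    deadline=date(2023, 12, 31),
)
```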
Govern
Effective AI risk management requires a strong governance culture across an organization's hierarchy and AI system lifecycle. The Govern section emphasizes the importance of establishing a risk management culture to support the other AI RMF functions.
Map
Context is vital for identifying and managing AI risks. The Map function establishes the context for framing risks related to an AI system. By understanding the broader contributing factors, organizations can enhance their ability to identify risks and create a strong foundation for the Measure and Manage functions.
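One way to picture the output of the Map function is a context record that ties each identified risk back to the setting it arises in. The structure below is purely illustrative; all field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SystemContext:
    intended_use: str
    affected_groups: list[str]
    deployment_setting: str
    identified_risks: list[str] = field(default_factory=list)

ctx = SystemContext(
    intended_use="automated loan pre-screening",
    affected_groups=["applicants", "loan officers"],
    deployment_setting="production web service",
)
# Risks identified here carry their context into Measure and Manage
ctx.identified_risks.append("disparate impact on protected groups")
```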
Measure
In the Measure section, organizations leverage quantitative, qualitative, or mixed-method tools to analyze, assess, benchmark, and monitor AI risk and related impacts. By drawing upon the knowledge of AI risks identified in the Map function, organizations can better inform their risk monitoring and response efforts in the Manage function.
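As a simple quantitative example, many risk programs score risks on a likelihood-by-impact matrix. The 5x5 scale and multiplication below are a common convention assumed for illustration, not something the NIST AI RMF mandates.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on an assumed 5x5 matrix; both inputs run 1 (low) to 5 (high)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return likelihood * impact

print(risk_score(likelihood=4, impact=5))  # 20: a high-priority risk
```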
Manage
The Manage function utilizes systematic documentation practices established in Govern, contextual information from Map, and empirical data from Measure to address identified risks and reduce the likelihood of system failures and negative impacts. This section focuses on risk treatment, including plans to respond to, recover from, and communicate about incidents or events.
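Continuing the illustrative scoring above, the Manage function might rank measured risks and record a treatment decision against a tolerance threshold. The threshold, scores, and treatment labels here are placeholders for the sketch.

```python
risks = [
    {"name": "model drift", "score": 12},
    {"name": "privacy leakage from training data", "score": 20},
    {"name": "hallucinated outputs", "score": 9},
]

# Rank by score, then assign a treatment based on an assumed tolerance threshold
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    risk["treatment"] = "mitigate" if risk["score"] >= 10 else "accept"

print([(r["name"], r["treatment"]) for r in risks])
```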
Our new NIST Framework-based assessment template empowers organizations to navigate AI risk management with confidence. By following the template, organizations can gain a deeper understanding of their AI systems, identify potential risks, and develop strategies to mitigate those risks. This comprehensive approach to AI risk management will ultimately contribute to more responsible AI development and deployment.
Speak to an expert today to see how the OneTrust Trust Intelligence platform helps put what’s good for people and planet at the center of your business.