
How can organisations manage the risks associated with Artificial Intelligence (AI)?

Updated: Nov 21, 2023



All organisations that develop or deploy Artificial Intelligence (AI) need to manage the risks associated with it. This includes public, private, and non-governmental entities.


While numerous risk management frameworks, processes, and tools are already available, the risks posed by AI systems are arguably unique: they can change over time, are frequently complex and inherently socio-technical, and can cause inequitable or undesirable outcomes for individuals and society.


The US National Institute of Standards and Technology (NIST) has developed a framework to help manage the risks associated with AI: the NIST AI Risk Management Framework (AI RMF). It was created as a result of the US National Artificial Intelligence Initiative Act of 2020.


The NIST AI RMF provides a helpful foundational reference for any organisation (including those based outside the US) to manage the risks associated with AI.


The AI RMF is voluntary and is intended to improve organisations' ability to incorporate trustworthiness into the design, development, use, and evaluation of AI products, services, and systems. It was developed through a collaborative process between NIST, industry, other institutions, and the public.


The AI RMF is divided into two parts. Part 1 focuses on how organisations can frame the risks related to AI. It then discusses how AI risks and trustworthiness are analysed, and sets out the characteristics of trustworthy AI systems: validity and reliability, safety, security and resilience, accountability and transparency, explainability, privacy, and fairness with harmful biases managed.


Part 2 of the AI RMF describes four functions that help organisations address the risks of AI systems in practice: (1) GOVERN, (2) MAP, (3) MEASURE, and (4) MANAGE, each with further categories and subcategories. The GOVERN function applies across all stages of AI risk management processes and procedures, while the MAP, MEASURE, and MANAGE functions are applied within specific contexts and stages of the AI lifecycle.
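As an illustrative sketch only (not an official NIST artifact), the four functions can be used to structure a simple internal AI risk register. The field names and the example risk below are hypothetical, chosen to show how entries might be grouped by function and lifecycle stage:

```python
from dataclasses import dataclass

# The four AI RMF core functions (Part 2 of the framework).
FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

@dataclass
class RiskEntry:
    """One tracked AI risk, filed under an AI RMF function.

    The fields here are illustrative, not prescribed by NIST.
    """
    function: str          # one of FUNCTIONS
    description: str       # what could go wrong
    lifecycle_stage: str   # e.g. "design", "deployment"
    mitigation: str = ""   # planned or applied response

    def __post_init__(self):
        if self.function not in FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {self.function}")

# A minimal register grouping entries by function.
register: dict[str, list[RiskEntry]] = {f: [] for f in FUNCTIONS}

def add_risk(entry: RiskEntry) -> None:
    register[entry.function].append(entry)

# Example: a hypothetical MAP-stage risk identified during design.
add_risk(RiskEntry(
    function="MAP",
    description="Training data under-represents affected user groups",
    lifecycle_stage="design",
    mitigation="Audit dataset demographics before model training",
))
```

Grouping entries this way mirrors the framework's structure: GOVERN entries would hold organisation-wide policies, while MAP, MEASURE, and MANAGE entries attach to specific systems and lifecycle stages.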


NIST has developed supplementary resources, including the AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, and an AI RMF Crosswalk.


In March 2023, NIST also launched the Trustworthy and Responsible AI Resource Center, which is intended to support organisations in aligning with the AI RMF.


Organisations developing and/or deploying AI may find the NIST AI RMF and its supplementary resources helpful in their ongoing management of AI risks.




Helpful links:


NIST Trustworthy and Responsible AI Resource Center - https://airc.nist.gov/Home



