The US Treasury has published several documents for the US financial services sector that propose a structured approach to managing AI risks across operations and policy.

Sector-specific framework

AI systems introduce risks that existing technology governance frameworks don't address adequately. These risks include algorithmic bias, limited transparency in decision processes, cyber vulnerabilities, and complex dependencies between systems and data. Large language models (LLMs) raise particular concerns because their behavior can be difficult to interpret or predict.

The Financial Services AI Risk Management Framework (FS AI RMF) is being positioned as an extension to the NIST framework, with additional sector-specific controls and practical implementation guidelines. The objective is to help financial institutions identify, evaluate, manage, and govern the risks associated with AI systems, while allowing firms to continue adopting AI technologies responsibly.

Core structure

The FS AI RMF connects AI governance with the broader governance, risk, and compliance processes already in place at financial institutions. The framework contains four main components: an AI adoption stage questionnaire, a risk and control matrix, a guidebook explaining how to apply the framework, and a control objective reference guide.

The adoption stage questionnaire helps organizations determine the maturity of their AI use, evaluating factors like the business impact of AI, governance arrangements, deployment models, use of third-party AI providers, organizational objectives, and data sensitivity.

Based on this assessment, organizations are classified into one of four stages of AI adoption: initial, minimal, evolving, and embedded. These stages help institutions focus their efforts on controls appropriate to their maturity level.
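The questionnaire-to-stage mapping can be pictured as a simple scoring exercise. The sketch below is purely illustrative: the factor names, the 0-3 scoring scale, and the even division of the score range across stages are assumptions for the example, not details specified by the FS AI RMF.

```python
# Hypothetical sketch: mapping questionnaire answers to an adoption stage.
# Factor names, the 0-3 scale, and the thresholds are illustrative only.

STAGES = ["initial", "minimal", "evolving", "embedded"]

FACTORS = [
    "business_impact",
    "governance_arrangements",
    "deployment_models",
    "third_party_providers",
    "organizational_objectives",
    "data_sensitivity",
]

def classify_adoption_stage(answers: dict) -> str:
    """Map questionnaire answers (each scored 0-3) to one of four stages."""
    total = sum(answers.get(f, 0) for f in FACTORS)
    max_total = 3 * len(FACTORS)
    # Divide the possible score range evenly across the four stages.
    index = min(total * len(STAGES) // (max_total + 1), len(STAGES) - 1)
    return STAGES[index]

answers = {
    "business_impact": 2,
    "governance_arrangements": 1,
    "deployment_models": 1,
    "third_party_providers": 2,
    "organizational_objectives": 1,
    "data_sensitivity": 2,
}
print(classify_adoption_stage(answers))  # -> "minimal"
```

An institution with no answers scored (all zeros) would land in the initial stage, while full marks across all factors would place it in the embedded stage.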

Risk and control

The control objectives for each AI adoption stage address governance and operational topics including data quality management, fairness and bias monitoring, cybersecurity controls, transparency of AI decision processes, and operational resilience.
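A risk and control matrix of this kind can be modeled as a lookup from adoption stage to applicable control objectives. The control names and their stage assignments below are invented for illustration; the actual matrix in the FS AI RMF defines its own objectives and mappings.

```python
# Illustrative sketch only: control-objective names and their minimum
# adoption stages are invented, not taken from the FS AI RMF.

STAGE_ORDER = ["initial", "minimal", "evolving", "embedded"]

# Each control objective applies from a given minimum stage onward.
CONTROL_OBJECTIVES = {
    "data_quality_management": "initial",
    "cybersecurity_controls": "initial",
    "fairness_bias_monitoring": "evolving",
    "decision_transparency": "evolving",
    "operational_resilience": "embedded",
}

def applicable_controls(stage: str) -> list:
    """Return control objectives whose minimum stage is at or below `stage`."""
    level = STAGE_ORDER.index(stage)
    return [
        name
        for name, min_stage in CONTROL_OBJECTIVES.items()
        if STAGE_ORDER.index(min_stage) <= level
    ]

print(applicable_controls("evolving"))
```

The design choice here mirrors the framework's intent: lower-maturity institutions see a smaller, foundational control set, and the list grows cumulatively as adoption deepens.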

The framework recommends maintaining incident response procedures specific to AI systems and creating a central repository for tracking AI incidents; together, these processes help organizations detect failures and improve governance over time.
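A central incident repository could be as simple as a structured log that supports per-system review. The sketch below assumes an in-memory store with hypothetical field names; the framework does not prescribe a schema.

```python
# Minimal sketch of a central AI incident repository. The record fields
# and severity labels are illustrative, not prescribed by the framework.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    system: str
    description: str
    severity: str  # e.g. "low", "medium", "high"
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class IncidentRepository:
    """Central store supporting detection of recurring failures per system."""

    def __init__(self):
        self._incidents = []

    def record(self, incident: AIIncident) -> None:
        self._incidents.append(incident)

    def by_system(self, system: str) -> list:
        return [i for i in self._incidents if i.system == system]

repo = IncidentRepository()
repo.record(AIIncident("credit-scoring-model", "Drift in approval rates", "high"))
repo.record(AIIncident("chat-assistant", "Incorrect policy details", "medium"))
print(len(repo.by_system("credit-scoring-model")))  # -> 1
```

Querying by system over time is what turns a log into a governance tool: repeated incidents against one model signal a control gap rather than a one-off failure.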

Trustworthy AI

The framework incorporates principles for trustworthy AI: validity and reliability, safety, security and resilience, accountability, transparency, explainability, privacy protection, and fairness. These provide a foundation for evaluating AI systems across their full lifecycle.

For senior leaders in financial institutions, the FS AI RMF offers a guide to integrating AI into existing risk management frameworks. It emphasizes the need for coordination across business functions: technology teams, risk officers, compliance specialists, and business units all need to participate in the AI governance process.