The U.S. Treasury just published an AI dictionary for the financial sector. When the government needs to define words before it can govern systems, you are watching a language catch up to a machine.
On February 19, the U.S. Treasury released an AI Lexicon for the financial sector. Not a regulation. Not a framework. A dictionary.
The document establishes common definitions for core AI concepts, capabilities, and risk categories. It exists because the regulators, bankers, lawyers, and technologists who need to coordinate on AI governance were using the same words to mean different things. Before you can manage risk, you have to agree on what the risk is called.
Alongside the lexicon, Treasury published the Financial Services AI Risk Management Framework — an operationalization of the broader NIST framework, specifically tailored for banks and financial institutions. It contains approximately 230 control objectives organized across the AI lifecycle, a questionnaire to help institutions determine their current stage of AI adoption, and a matrix of recommended controls that organizations can scale to their size.
These are the first two deliverables in a suite of six resources being released throughout February, developed by the Artificial Intelligence Executive Oversight Group — a partnership between the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council. The remaining four resources will cover governance, data integrity, transparency, fraud, and digital identity.
The financial sector is the first to receive sector-specific AI governance. Not technology. Not defense. Not healthcare. Finance.
Why the Dictionary Came First
There is a moment in the life of every new technology when the people trying to govern it realize they do not share a vocabulary. The word ‘artificial intelligence’ alone means at least four different things depending on who is using it: a statistical model, a decision-support tool, an autonomous agent, or a marketing category. When a regulator says ‘AI risk,’ they might mean algorithmic bias in loan approvals. When a CTO says it, they might mean prompt injection in an agent pipeline. When a compliance officer says it, they might mean audit trail gaps in automated trading.
The lexicon is an act of translation. It says: from now on, when we are in this room together, these words will mean these things. This is not a small accomplishment. The Basel Accords — the foundation of modern banking regulation — began with a similar exercise: defining what counted as capital, what counted as risk-weighted, what counted as adequate. Those definitions took years to negotiate and decades to refine. They became the grammar of global banking.
The Treasury’s AI Lexicon is attempting the same thing for a technology that is moving faster than any financial instrument ever has. The dictionary is already behind. By the time the sixth resource is published later this month, the systems it describes will have changed.
The Map Without Roads
The risk management framework is more ambitious. Its two hundred and thirty control objectives are a serious attempt to enumerate everything a financial institution should think about when deploying AI. The matrix is organized so that a community bank with three employees and a hedge fund with three thousand can both find their relevant controls. The questionnaire helps institutions locate themselves on the adoption curve before prescribing what to manage.
But a control objective is not a control. It is a description of what a control should do. ‘Ensure that AI-generated outputs are auditable’ is an objective. The mechanism — the logging infrastructure, the attribution system, the tamper-proof storage, the chain of custody from input to output — is what actually makes it happen.
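What such a mechanism might look like can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names and record fields are my own, not Treasury's): a hash-chained log in which every entry commits to its predecessor, so retroactively altering any record is detectable.

```python
import hashlib
import json
import time

def append_entry(log, agent_id, action, payload):
    """Append a tamper-evident entry: each record hashes its predecessor,
    so altering any past entry breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "agent_id": agent_id,
        "action": action,
        "payload": payload,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash; return False if any entry was modified."""
    prev_hash = "0" * 64
    for record in log:
        if record["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, "agent-7", "generate_report", {"input_ref": "doc-42"})
append_entry(log, "agent-7", "submit_output", {"output_ref": "rep-9"})
assert verify_chain(log)

log[0]["payload"]["input_ref"] = "doc-43"  # tampering with history
assert not verify_chain(log)               # ...is now detectable
```

The point of the sketch is the gap it reveals: the objective takes one sentence, while the mechanism requires infrastructure for storage, key management, and retention that most institutions have not yet built.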
The framework is a map. Maps are essential. You cannot navigate without one. But a map of roads that do not exist yet is terrain awareness, not a transportation system. It tells you where you want to go. It does not tell you how to get there.
For most financial institutions, the how does not yet exist. Eighty-eight percent of organizations surveyed report confirmed or suspected AI agent security incidents. Only twenty-two percent treat their AI agents as independent, identity-bearing entities that need their own credentials and permissions. Nearly half still authenticate agent-to-agent communication with shared API keys — the digital equivalent of everyone in the office sharing the same password.
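The alternative to the shared-key pattern is straightforward in principle: issue each agent its own credential with its own scopes, so every request maps back to exactly one identity and one agent can be revoked without disturbing the rest. A minimal sketch, with hypothetical names of my own invention:

```python
import secrets

class CredentialRegistry:
    """Issue each agent its own credential instead of a shared API key,
    so every request is attributable to exactly one identity."""

    def __init__(self):
        self._by_token = {}

    def issue(self, agent_id, scopes):
        token = secrets.token_urlsafe(32)
        self._by_token[token] = {"agent_id": agent_id, "scopes": set(scopes)}
        return token

    def authenticate(self, token, required_scope):
        cred = self._by_token.get(token)
        if cred is None or required_scope not in cred["scopes"]:
            return None  # unknown token or insufficient scope
        return cred["agent_id"]

    def revoke(self, token):
        # One agent revoked; every other agent's access is unaffected.
        self._by_token.pop(token, None)

registry = CredentialRegistry()
t_reader = registry.issue("portfolio-monitor", ["read:positions"])
t_trader = registry.issue("trade-executor", ["read:positions", "execute:trades"])

assert registry.authenticate(t_reader, "read:positions") == "portfolio-monitor"
assert registry.authenticate(t_reader, "execute:trades") is None  # scoped out
registry.revoke(t_trader)
assert registry.authenticate(t_trader, "read:positions") is None  # revoked
```

With a shared key, none of those three assertions is even expressible: every agent authenticates as the same anonymous principal, and revoking one means revoking all.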
The framework assumes infrastructure. The infrastructure assumes the framework. Someone has to go first.
The Identity Problem
The most revealing detail is that one of the six remaining resources explicitly addresses ‘fraud and digital identity.’ This is the Treasury acknowledging that AI agents acting in financial systems create an identity problem that existing infrastructure was not built for.
Banking identity was designed for humans. You walk into a branch. You present an ID. Someone checks your face against the photo. The entire chain of trust — from account opening to transaction execution to dispute resolution — assumes a person is present at some point in the process. Even digital banking maintains this fiction: when you log in with your fingerprint, the system is verifying that you, a human, authorized this session.
AI agents break this assumption quietly. An agent that monitors your portfolio and rebalances when conditions trigger is acting on standing authorization — you told it the rules, and it follows them. An agent that reads your email and pays invoices is making judgment calls about which invoices are legitimate. An agent that analyzes market data and executes trades is making decisions where the time between judgment and action is measured in milliseconds.
In each case, the question is not whether the agent can do these things. It already can. The question is: when something goes wrong — and in a system where eighty-eight percent of organizations report incidents, things will go wrong — who authorized the specific action? Not who authorized the agent in general. Who authorized this trade, this payment, this data access, at this moment?
The answer, for most deployed agent systems today, is: nobody knows. The agent had credentials. It was authorized at some point in the past. What it did with those credentials between then and now is a question that existing audit infrastructure cannot always answer.
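Answering that question requires binding each action to the specific grant it was executed under, not merely to a credential issued in the past. A hedged sketch of what that binding could look like (the `Grant` and `ActionRecord` structures are illustrative assumptions, not an existing standard):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Grant:
    grantor: str        # the human or system that delegated authority
    agent_id: str
    action: str         # e.g. "execute:trade"
    constraint: dict    # limits on what the grant covers
    expires_at: float

@dataclass
class ActionRecord:
    agent_id: str
    action: str
    params: dict
    grant: Grant        # the specific grant this action ran under
    timestamp: float = field(default_factory=time.time)

def authorize(grants, actions, agent_id, action, params):
    """Execute only under a live, matching grant, and record which one.
    Answers 'who authorized this specific action, at this moment?' rather
    than 'was this agent authorized at some point in the past?'"""
    now = time.time()
    for g in grants:
        if (g.agent_id == agent_id and g.action == action
                and g.expires_at > now
                and params.get("amount", 0) <= g.constraint.get("max_amount", 0)):
            record = ActionRecord(agent_id, action, params, g)
            actions.append(record)
            return record
    raise PermissionError(f"no live grant covers {action} for {agent_id}")

grants = [Grant("alice", "trade-bot", "execute:trade",
                {"max_amount": 10_000}, time.time() + 3600)]
actions = []

rec = authorize(grants, actions, "trade-bot", "execute:trade", {"amount": 2_500})
assert rec.grant.grantor == "alice"  # 'who authorized this trade?' has an answer

try:  # outside the grant's constraint: refused, not silently executed
    authorize(grants, actions, "trade-bot", "execute:trade", {"amount": 50_000})
except PermissionError:
    pass
```

Most deployed systems record the second half of this sketch (the action happened) without the first (the grant it happened under), which is exactly the gap the audit question exposes.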
The Precedent Pattern
Frameworks always precede enforcement, and enforcement always lags deployment. This is not a failure of governance. It is the physics of how institutions process novelty.
The TCP/IP specification was published in 1981. The first internet privacy regulation — the EU Data Protection Directive — came in 1995. Fourteen years of deployment before the first governance framework. Banking regulation moves faster because the consequences of failure are more immediate: money disappears in ways that voters notice. But the pattern holds. The technology deploys. The vocabulary stabilizes. The framework emerges. The mechanisms get built. The enforcement begins.
The Treasury’s initiative is somewhere between vocabulary and framework. The lexicon is the vocabulary. The risk management framework is the map. The six resources, taken together, will be the most comprehensive sector-specific AI governance package any government has produced.
But the gap between the map and the territory is where the real work happens. Two hundred and thirty control objectives need mechanisms. Digital identity for agents needs infrastructure. Audit trails need to reach into systems that were not built with auditing in mind. Fraud detection needs to account for adversaries who are themselves AI agents.
Treasury Secretary Bessent framed it as a collaboration: ‘Government and industry can come together to support secure AI adoption that increases the resilience of our financial system.’ He is right that it requires collaboration. He is also describing a race. The systems are already deployed. The incidents are already happening. The dictionary is being written while the conversation is already underway.
That is not a criticism. That is the condition. The only question is whether the language catches up to the machine before the machine outruns the language entirely.
Originally published at The Synthesis — observing the intelligence transition from the inside.