Mapping the unknown: How to assess the true costs of AI incidents
The costs of artificial intelligence misuse and malfunction are real and growing, but nobody knows the total scope—or the true financial impact—of such incidents because there is no consistent, systematic approach to tracking and measuring them.
Poorly managed AI agents, for example, have made recent headlines by knocking major service providers offline, causing financial damage to both the host companies and their customers. The US Securities and Exchange Commission has warned companies repeatedly to accurately disclose AI-related risks and taken steps to punish those engaged in “AI washing,” or the misrepresentation of AI use.
Yet as AI-related problems proliferate, public accountability mechanisms remain woefully underdeveloped. While companies are required to discuss AI-related risks in their public filings, that language tends to be generic, signaling little about the actual frequency or severity of incidents. Meanwhile, existing incident repositories like the OECD AI Incidents Monitor and the AI Incident Database are valuable for surveying the breadth of documented incidents, but are limited by factors including their reliance on crowdsourcing and inconsistent standardization. This transparency gap means developers and regulators lack the data needed to systematically track AI incidents and produce more responsible products and services, while investors and institutions struggle to understand and quantify the AI risk landscape.
Building a framework to measure AI incident costs
To help address this knowledge gap, Oxford Economics partnered with HCLTech to draft a preliminary framework for assessing the nature, scope, and cost of AI incidents. Our work focused on the financial services sector in the US but can be tailored to apply to broad swaths of the global economy.
We conducted a review of existing reports, published articles, and other documentation of AI risk in the US financial sector. We then used an API to query the SEC’s EDGAR database of required filings by publicly traded companies. We also reviewed a variety of proposed frameworks and taxonomies for AI risk and cost from global researchers and leading universities.
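The EDGAR query step can be sketched programmatically. The report does not specify which EDGAR endpoint was used, so the example below is an assumption-laden illustration built on the SEC's public `data.sec.gov` submissions API, which returns a company's filing history as JSON; the CIK and contact email are hypothetical placeholders.

```python
"""Illustrative sketch of querying SEC EDGAR filings programmatically.

Assumptions: uses the public data.sec.gov "submissions" endpoint (one of
several EDGAR APIs); the CIK and User-Agent values are placeholders.
"""
import json
import urllib.request


def submissions_url(cik: str) -> str:
    # EDGAR expects the Central Index Key zero-padded to 10 digits.
    return f"https://data.sec.gov/submissions/CIK{int(cik):010d}.json"


def fetch_submissions(cik: str, user_agent: str) -> dict:
    # The SEC asks automated clients to send a descriptive User-Agent header.
    req = urllib.request.Request(
        submissions_url(cik), headers={"User-Agent": user_agent}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def filter_forms(submissions: dict, form_type: str) -> list[str]:
    # Return the filing dates of a given form type (e.g., annual 10-K reports),
    # which is where AI-risk disclosure language typically appears.
    recent = submissions["filings"]["recent"]
    return [
        date
        for form, date in zip(recent["form"], recent["filingDate"])
        if form == form_type
    ]
```

A study like the one described could fetch each issuer's submissions, filter to annual reports, and then search the filing text for AI-related risk language.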
Measuring AI Incidents and Impacts in the US Financial Sector
Based on our research, we suggest a new template for categorizing incident types and their impacts in the US financial sector to better assess direct and indirect costs. Our taxonomy covers several types of AI incidents along with their causes, impacts, and relevant metadata. For example, an AI incident might involve errors in decision-support and risk assessment, or problems with trading execution, among others; causes could include (but are not limited to) technical problems or bad data; the list of impacts starts with financial hits, both direct and indirect; and metadata includes the type of AI in use (e.g., a large language model) and the area of the business where it is deployed, among other key points.
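The taxonomy's four dimensions—incident type, causes, impacts, and metadata—can be sketched as a simple record type. The field names and category values below are illustrative assumptions; the report's actual taxonomy may differ in naming and granularity.

```python
"""Illustrative sketch of the incident taxonomy as a record type.

Assumptions: category values and field names are invented for illustration
and are not the report's definitive schema.
"""
from dataclasses import dataclass
from enum import Enum


class IncidentType(Enum):
    DECISION_SUPPORT_ERROR = "errors in decision-support and risk assessment"
    TRADING_EXECUTION = "problems with trading execution"
    OTHER = "other"


class Cause(Enum):
    TECHNICAL_PROBLEM = "technical problem"
    BAD_DATA = "bad data"
    OTHER = "other"


@dataclass
class Impact:
    direct_cost_usd: float = 0.0    # e.g., remediation, refunds, fines
    indirect_cost_usd: float = 0.0  # e.g., reputational or market losses


@dataclass
class AIIncident:
    incident_type: IncidentType
    causes: list[Cause]
    impact: Impact
    # Metadata: the type of AI in use and the business area where deployed.
    ai_system: str = "large language model"
    business_area: str = "unspecified"

    def total_cost(self) -> float:
        return self.impact.direct_cost_usd + self.impact.indirect_cost_usd
```

Recording incidents in a shared structure like this is what would let institutions aggregate costs across business areas and compare like with like.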
While our framework has not yet been tested with direct outreach to industry professionals, it can help institutions understand their AI-related incidents and the underlying causes; identify existing or potential problems; assess the operational, financial, and regulatory impacts; and track incidents more precisely.